[jira] [Created] (CASSANDRA-2363) cli sets RF to 1 when replica strategy is not specified
cli sets RF to 1 when replica strategy is not specified --- Key: CASSANDRA-2363 URL: https://issues.apache.org/jira/browse/CASSANDRA-2363 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.4 Reporter: Aaron Morton Priority: Minor Fix For: 0.7.5, 0.8 If a keyspace is created via the cli with {noformat} create keyspace dev with replication_factor = 2; {noformat} It will be created using the NetworkTopologyStrategy and default options {noformat} [default@dev] describe keyspace; Keyspace: dev: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Options: [datacenter1:1] {noformat} And the effective RF will be 1 not 2. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2363) cli sets RF to 1 when replica strategy is not specified
[ https://issues.apache.org/jira/browse/CASSANDRA-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Morton updated CASSANDRA-2363: Affects Version/s: (was: 0.7.4) 0.8 Fix Version/s: (was: 0.7.5) Assignee: Aaron Morton
[jira] [Updated] (CASSANDRA-2363) cli sets RF to 1 when replica strategy is not specified
[ https://issues.apache.org/jira/browse/CASSANDRA-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Morton updated CASSANDRA-2363: Attachment: 0001-use-the-cluster-RF-for-the-default-DC-RF.patch
[Cassandra Wiki] Update of MemtableSSTable by JingguoYao
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The MemtableSSTable page has been changed by JingguoYao. The comment on this change is: Update CFStoreMBean to ColumnFamilyStoreMBean. http://wiki.apache.org/cassandra/MemtableSSTable?action=diff&rev1=15&rev2=16 -- SSTables that are obsoleted by a compaction are deleted asynchronously when the JVM performs a GC. You can force a GC from jconsole if necessary, but Cassandra will force one itself if it detects that it is low on space. A compaction marker is also added to obsolete sstables so they can be deleted on startup if the server does not perform a GC before being restarted. - CFStoreMBean exposes sstable space used as getLiveDiskSpaceUsed (only includes size of non-obsolete files) and getTotalDiskSpaceUsed (includes everything). + ColumnFamilyStoreMBean exposes sstable space used as getLiveDiskSpaceUsed (only includes size of non-obsolete files) and getTotalDiskSpaceUsed (includes everything). == Further reading == (The high-level memtable/sstable design as well as the Memtable and SSTable names come from sections 5.3 and 5.4 of [[http://labs.google.com/papers/bigtable.html|Google's Bigtable paper]], although some of the terminology around compaction differs.)
[Cassandra Wiki] Update of MemtableSSTable_JP by MakiWatanabe
The MemtableSSTable_JP page has been changed by MakiWatanabe. The comment on this change is: Update translation to latest EN page. http://wiki.apache.org/cassandra/MemtableSSTable_JP?action=diff&rev1=8&rev2=9 -- ## page was copied from MemtableSSTable + == Overview == Writes in Cassandra go first to the commit log ([[Durability|CommitLog]]) and are then written to a per-!ColumnFamily structure called a Memtable. A Memtable is basically a write-back cache of data rows that can be looked up by key: that is, unlike a write-through cache, writes accumulate in the Memtable until it fills up, and only then are they written to disk as an SSTable. + == Flushing == The process of turning a Memtable into an SSTable is called flushing. You can also trigger a flush manually via JMX (for example with nodetool); doing so before restarting a node will shorten commit log replay time. Memtables are sorted by key and written out sequentially, so writes are extremely fast: the only costs are the commit log append and the sequential write at flush time! Once flushed, an SSTable file is immutable; no further writes are possible. On a read, therefore, the server must combine row fragments from all the on-disk SSTables and any unflushed Memtables to produce the requested data (although tricks such as bloom filters let it avoid unnecessary reads). + == Compaction == - To bound the number of SSTable files that must be read, and to reclaim [[DistributedDeletes_JP|space occupied by unused data]], Cassandra performs compaction: merging several old SSTable files into a single new one. Compaction runs once at least four SSTables have been flushed to disk; four similarly-sized SSTables are merged into a single SSTable. SSTables start at the Memtable flush size and form a tiered hierarchy of doubling sizes: first up to four SSTables the size of a Memtable are created, then up to four of twice that size, then up to four of twice that size again, and so on. + To bound the number of SSTable files that must be read, and to reclaim [[DistributedDeletes_JP|space occupied by unused data]], Cassandra performs compaction: merging several old SSTable files into a single new one. Compaction runs once at least N SSTables have been flushed to disk, where N is configurable and defaults to 4; four similarly-sized SSTables are merged into a single SSTable. SSTables start at the Memtable flush size and form a tiered hierarchy in which sizes grow by up to a factor of N: first up to N SSTables the size of a Memtable are created, then SSTables of up to N times that size, then up to N times that size again, and so on. - 
Minor compactions merge similarly-sized SSTables; a major compaction merges all the SSTables of a given !ColumnFamily. Only during a major compaction is data marked with a [[DistributedDeletes_JP|tombstone]] actually removed. + Minor compactions merge similarly-sized SSTables; a major compaction merges all the SSTables of a given !ColumnFamily. Before 0.6.6/0.7.0, data marked with a [[DistributedDeletes_JP|tombstone]] is removed only during a major compaction. Since the input SSTables are all sorted by key, the merge is efficient and requires no random I/O. When a compaction finishes, the old SSTable files are deleted. Note that in the worst case (data with no overwrites or deletes) this temporarily requires twice the disk space currently in use. Now that multi-terabyte disks are common this is rarely a problem, but it is worth keeping in mind when setting alert thresholds.
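The size-tiered scheme described above can be sketched numerically. This is a minimal illustration, not Cassandra code; the 64 MB flush size is a made-up example value, and N is the min compaction threshold (default 4):

```java
public class TieredBucketsSketch {
    // Size of an sstable at a given tier: flush size times N^level.
    static long levelSizeMB(long flushSizeMB, int n, int level) {
        long size = flushSizeMB;
        for (int i = 0; i < level; i++)
            size *= n; // each tier holds sstables up to N times larger
        return size;
    }

    public static void main(String[] args) {
        int n = 4;             // default min compaction threshold
        long flushSizeMB = 64; // hypothetical memtable flush size
        for (int level = 0; level <= 3; level++)
            System.out.println("tier " + level + ": up to " + n
                    + " sstables of ~" + levelSizeMB(flushSizeMB, n, level) + " MB");
    }
}
```

With these numbers, the tiers hold sstables of roughly 64, 256, 1024, and 4096 MB.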
[Cassandra Wiki] Trivial Update of MemtableSSTable_JP by MakiWatanabe
The MemtableSSTable_JP page has been changed by MakiWatanabe. http://wiki.apache.org/cassandra/MemtableSSTable_JP?action=diff&rev1=9&rev2=10 -- ## page was copied from MemtableSSTable + ## ReSync translation to MemtableSStable#16 2011/03/22 + == Overview == Writes in Cassandra go first to the commit log ([[Durability|CommitLog]]) and are then written to a per-!ColumnFamily structure called a Memtable. A Memtable is basically a write-back cache of data rows that can be looked up by key: that is, unlike a write-through cache, writes accumulate in the Memtable until it fills up, and only then are they written to disk as an SSTable.
[Cassandra Wiki] Update of FAQ_JP by MakiWatanabe
The FAQ_JP page has been changed by MakiWatanabe. The comment on this change is: Translate large_file_and_blob_storage. http://wiki.apache.org/cassandra/FAQ_JP?action=diff&rev1=67&rev2=68 -- Anchor(large_file_and_blob_storage) == Can Cassandra store large files or BLOBs? == - Currently Cassandra isn't optimized specifically for large file or BLOB storage. However, files of around 64Mb and smaller can be easily stored in the database without splitting them into smaller chunks. This is primarily due to the fact that Cassandra's public API is based on Thrift, which offers no streaming abilities; any value written or fetched has to fit in to memory. Other non Thrift interfaces may solve this problem in the future, but there are currently no plans to change Thrift's behavior. When planning applications that require storing BLOBS, you should also consider these attributes of Cassandra as well: + Currently Cassandra is not optimized for large file or Blob storage. However, files of up to around 64MB can easily be stored without splitting them into smaller chunks. This is a constraint that stems mainly from Thrift, which Cassandra uses for its public API: Thrift offers no streaming, so any data written or read must fit in memory. + Supporting interfaces other than Thrift may solve this problem in the future, but there are currently no plans to change Thrift's own behavior. When designing an application that stores Blobs, also keep the following characteristics of Cassandra in mind: - * The main limitation on a column and super column size is that all the data for a single key and column must fit (on disk) on a single machine(node) in the cluster. Because keys alone are used to determine the nodes responsible for replicating their data, the amount of data associated with a single key has this upper bound. This is an inherent limitation of the distribution model. 
+ * The data for a given key and column must fit on the disk of a single node; this is the main limitation on the size of a column or super column. Because keys are used to determine the nodes responsible for replicating the data, the amount of data tied to a single key is bounded by the storage of the nodes it is placed on. This is a limitation inherent in Cassandra's distribution model. - * When large columns are created and retrieved, that columns data is loaded into RAM which can get resource intensive quickly. Consider, loading 200 rows with columns that store 10Mb image files each into RAM. That small result set would consume about 2Gb of RAM. Clearly as more and more large columns are loaded, RAM would start to get consumed quickly. This can be worked around, but will take some upfront planning and testing to get a workable solution for most applications. You can find more information regarding this behavior here: [[MemtableThresholds|memtables]], and a possible solution in 0.7 here: [[https://issues.apache.org/jira/browse/CASSANDRA-16|CASSANDRA-16]]. + * When a large column is created or read, its data is loaded into RAM, which can quickly become resource-intensive. For example, loading 200 rows whose columns each store a 10MB image file consumes roughly 2GB of RAM; the more such rows are loaded, the faster RAM is consumed. This can be worked around, but finding a solution that works for most applications takes upfront design and testing. For more on this behavior see [[MemtableThresholds|memtables]]; for a possible mitigation in 0.7 see the following link: [[https://issues.apache.org/jira/browse/CASSANDRA-16|CASSANDRA-16]] + * For further information, see [[CassandraLimitations|Cassandra Limitations]]. - * Please refer to the notes in the Cassandra limitations section for more information: [[CassandraLimitations|Cassandra Limitations]] - - Currently Cassandra is not specifically optimized for large files or BLOBs, but there are workarounds. - * See [[CassandraLimitations_JP|Cassandra's limitations]] for details. Anchor(jmx_localhost_refused)
[Cassandra Wiki] Trivial Update of FrontPage_JP by MakiWatanabe
The FrontPage_JP page has been changed by MakiWatanabe. http://wiki.apache.org/cassandra/FrontPage_JP?action=diff&rev1=77&rev2=78 -- * [[RunningCassandra_JP|Running Cassandra]] (translated) * [[ArchitectureOverview_JP|Architecture Overview]] (translation in progress) * [[UseCases|Simple Use Cases and Solutions]] -- please help complete - * [[FAQ_JP|FAQ]] (translated) + * [[FAQ_JP|FAQ]] (translation in progress) * [[Counters|Counters]] == Advanced setup and tuning ==
[Cassandra Wiki] Update of MemtableSSTable_JP by MakiWatanabe
The MemtableSSTable_JP page has been changed by MakiWatanabe. The comment on this change is: Sync to EN#16: CFStoreMBean -> ColumnFamilyStoreMBean. http://wiki.apache.org/cassandra/MemtableSSTable_JP?action=diff&rev1=10&rev2=11 -- ## page was copied from MemtableSSTable - ## ReSync translation to MemtableSStable#16 2011/03/22 == Overview == Writes in Cassandra go first to the commit log ([[Durability|CommitLog]]) and are then written to a per-!ColumnFamily structure called a Memtable. A Memtable is basically a write-back cache of data rows that can be looked up by key: that is, unlike a write-through cache, writes accumulate in the Memtable until it fills up, and only then are they written to disk as an SSTable. @@ -22, +21 @@ SSTables obsoleted by a compaction are deleted asynchronously when the JVM performs a GC. You can force a GC from jconsole if necessary, but Cassandra will force one itself when it detects that it is running low on disk space. In addition, a compaction marker is added to obsolete SSTables so that, if the server restarts without a GC having run, they are deleted at startup. - CFStoreMBean exposes, for the space used by SSTables, the operations getLiveDiskSpaceUsed (includes only non-obsolete files) and getTotalDiskSpaceUsed (includes everything). + ColumnFamilyStoreMBean exposes, for the space used by SSTables, the operations getLiveDiskSpaceUsed (includes only non-obsolete files) and getTotalDiskSpaceUsed (includes everything). (The high-level Memtable/SSTable design and names come from sections 5.3 and 5.4 of [[http://labs.google.com/papers/bigtable.html|Google's Bigtable paper]], although some of the terminology around compaction differs.)
[jira] [Updated] (CASSANDRA-2362) Make cassandra able to work across aws regions out of the box
[ https://issues.apache.org/jira/browse/CASSANDRA-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2362: -- Attachment: cassec2regions.patch Milind Parikh's patch from the mailing list Make cassandra able to work across aws regions out of the box - Key: CASSANDRA-2362 URL: https://issues.apache.org/jira/browse/CASSANDRA-2362 Project: Cassandra Issue Type: Improvement Reporter: Jeremy Hanna Labels: ec2 Attachments: cassec2regions.patch There has been some work done to get cassandra to work across aws ec2 regions but it involves patching cassandra's location code to do so. It would be nice if that work could be generalized and make it into trunk.
[jira] [Resolved] (CASSANDRA-2362) Make cassandra able to work across aws regions out of the box
[ https://issues.apache.org/jira/browse/CASSANDRA-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-2362. --- Resolution: Invalid This boils down to configure listen_address to the public ip.
[jira] [Created] (CASSANDRA-2364) Record dynamic snitch latencies for counter writes
Record dynamic snitch latencies for counter writes -- Key: CASSANDRA-2364 URL: https://issues.apache.org/jira/browse/CASSANDRA-2364 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Stu Hood Priority: Minor Fix For: 0.8 The counter code chooses a single replica to coordinate a write, meaning that it should be subject to dynamic snitch latencies like a read would be. This already works when there are reads going on, because the dynamic snitch read latencies are used to pick a node to coordinate, but when there are no reads going on (such as during a backfill) the latencies do not adjust.
[jira] [Commented] (CASSANDRA-2364) Record dynamic snitch latencies for counter writes
[ https://issues.apache.org/jira/browse/CASSANDRA-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009756#comment-13009756 ] Jonathan Ellis commented on CASSANDRA-2364: --- Disagree that we want to pollute read latency info w/ writes (which are almost impossible to slow down, so you're basically just measuring network latency and mixing that low-value info w/ the high-value signal of the reads). Also: always choosing the closest node to be the write coordinator seems like you lose the benefits of counter partitioning. Suggest using static snitch info to prefer a coordinator from the local DC and randomly pick from those.
[jira] [Created] (CASSANDRA-2365) ByteBufferUtil.read(byte[]) returns 0 when the end of the stream has been reached.
ByteBufferUtil.read(byte[]) returns 0 when the end of the stream has been reached. --- Key: CASSANDRA-2365 URL: https://issues.apache.org/jira/browse/CASSANDRA-2365 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.4 Reporter: Yewei Zhang Priority: Minor read(byte[], int, int) doesn't return -1 when the end of the stream is reached. Instead, it returns 0. len = Math.min(len, copy.remaining()); copy.get(bytes, off, len); return len; copy.remaining() returns 0 when the end of the stream is reached.
[jira] [Updated] (CASSANDRA-2365) ByteBufferUtil.read(byte[]) returns 0 when the end of the stream has been reached.
[ https://issues.apache.org/jira/browse/CASSANDRA-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2365: -- Attachment: 2365.txt patch to return -1 if no more data
[jira] [Commented] (CASSANDRA-2365) ByteBufferUtil.read(byte[]) returns 0 when the end of the stream has been reached.
[ https://issues.apache.org/jira/browse/CASSANDRA-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009784#comment-13009784 ] Sylvain Lebresne commented on CASSANDRA-2365: - +1
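The fix under discussion can be illustrated with a minimal stand-alone sketch of the end-of-stream contract. This is an illustration around the snippet quoted in the ticket, not the actual patch in 2365.txt:

```java
import java.nio.ByteBuffer;

public class ReadSketch {
    // InputStream-style read over a ByteBuffer. The snippet quoted in the
    // ticket returned len (0 at end of stream); the InputStream contract
    // instead requires -1 once no more data is available.
    static int read(ByteBuffer copy, byte[] bytes, int off, int len) {
        if (!copy.hasRemaining())
            return -1; // end of stream
        len = Math.min(len, copy.remaining());
        copy.get(bytes, off, len);
        return len;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] { 1, 2, 3 });
        byte[] out = new byte[2];
        System.out.println(read(buf, out, 0, 2)); // 2 (two bytes read)
        System.out.println(read(buf, out, 0, 2)); // 1 (one byte left)
        System.out.println(read(buf, out, 0, 2)); // -1 (end of stream)
    }
}
```

Without the `hasRemaining()` guard, the final call would return 0 and a caller looping until -1 would spin forever.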
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure of cassandra-trunk on ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/1157 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: Build Source Stamp: [branch cassandra/trunk] 1084315 Blamelist: jbellis BUILD FAILED: failed compile sincerely, -The Buildbot
[jira] [Commented] (CASSANDRA-2227) add cache loading to row/key cache tests
[ https://issues.apache.org/jira/browse/CASSANDRA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009801#comment-13009801 ] Jonathan Ellis commented on CASSANDRA-2227: --- I did my best to merge this to trunk but testRowCacheLoad is broken. I committed it anyway since the rest of the merge was a bunch of work thanks to counters -- can you check out current trunk and post a fix? add cache loading to row/key cache tests Key: CASSANDRA-2227 URL: https://issues.apache.org/jira/browse/CASSANDRA-2227 Project: Cassandra Issue Type: Test Components: Core Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Priority: Minor Fix For: 0.7.5 Attachments: CASSANDRA-2227-v2.patch, CASSANDRA-2227-v3.patch, CASSANDRA-2227.patch Original Estimate: 2h Remaining Estimate: 2h
[jira] [Commented] (CASSANDRA-1263) Push replication factor down to the replication strategy
[ https://issues.apache.org/jira/browse/CASSANDRA-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13009805#comment-13009805 ] Jon Hermes commented on CASSANDRA-1263: --- There's a slight hiccup with avro filed at AVRO-786. Inability to compare maps means that it's not possible to round-trip a KsDef test (in DatabaseDescriptorTest). Aside from that, patch posted. Push replication factor down to the replication strategy Key: CASSANDRA-1263 URL: https://issues.apache.org/jira/browse/CASSANDRA-1263 Project: Cassandra Issue Type: Task Components: Core Reporter: Jeremy Hanna Assignee: Jon Hermes Priority: Minor Fix For: 0.8 Attachments: 1263-incomplete.txt Original Estimate: 8h Remaining Estimate: 8h Currently the replication factor is in the keyspace metadata. As we've added the datacenter shard strategy, the replication factor is increasingly computed by the replication strategy. It seems reasonable to therefore push the replication factor for the keyspace down to the replication strategy so that it can be handled in one place. This adds on the work being done in CASSANDRA-1066 since that ticket will make the replication strategy a member variable of keyspace metadata instead of just a quasi singleton, giving the replication strategy state for each keyspace. That makes it able to have the replication factor.
[jira] [Updated] (CASSANDRA-2191) Multithread across compaction buckets
[ https://issues.apache.org/jira/browse/CASSANDRA-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2191: Attachment: (was: 0003-Expose-multiple-compactions-via-JMX-and-deprecate-sing.txt) Multithread across compaction buckets - Key: CASSANDRA-2191 URL: https://issues.apache.org/jira/browse/CASSANDRA-2191 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Stu Hood Priority: Critical Labels: compaction Fix For: 0.8 Attachments: 0001-Add-a-compacting-set-to-sstabletracker.txt, 0002-Use-the-compacting-set-of-sstables-to-schedule-multith.txt, 0003-Expose-multiple-compactions-via-JMX-and-deprecate-sing.txt This ticket overlaps with CASSANDRA-1876 to a degree, but the approaches and reasoning are different enough to open a separate issue. The problem with compactions currently is that they compact the set of sstables that existed the moment the compaction started. This means that for longer running compactions (even when running as fast as possible on the hardware), a very large number of new sstables might be created in the meantime. We have observed this proliferation of sstables killing performance during major/high-bucketed compactions. One approach would be to pause compactions in upper buckets (containing larger files) when compactions in lower buckets become possible. While this would likely solve the problem with read performance, it does not actually help us perform compaction any faster, which is a reasonable requirement for other situations. Instead, we need to be able to perform any compactions that are currently required in parallel, independent of what bucket they might be in.
[jira] [Updated] (CASSANDRA-2191) Multithread across compaction buckets
[ https://issues.apache.org/jira/browse/CASSANDRA-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2191: Attachment: (was: 0002-Use-the-compacting-set-of-sstables-to-schedule-multith.txt)
[jira] [Updated] (CASSANDRA-2156) Compaction Throttling
[ https://issues.apache.org/jira/browse/CASSANDRA-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2156: Attachment: (was: 0001-Throttle-total-compaction-to-a-configurable-throughput.txt) Compaction Throttling - Key: CASSANDRA-2156 URL: https://issues.apache.org/jira/browse/CASSANDRA-2156 Project: Cassandra Issue Type: New Feature Reporter: Stu Hood Fix For: 0.8 Attachments: 0001-Throttle-total-compaction-to-a-configurable-throughput.txt, for-0.6-0001-Throttle-compaction-to-a-fixed-throughput.txt, for-0.6-0002-Make-compaction-throttling-configurable.txt Compaction is currently relatively bursty: we compact as fast as we can, and then we wait for the next compaction to be possible (hurry up and wait). Instead, to properly amortize compaction, you'd like to compact exactly as fast as you need to in order to keep the sstable count under control. For every new level of compaction, you need to increase the rate that you compact at: a rule of thumb that we're testing on our clusters is to determine the maximum number of buckets a node can support (aka, if the 15th bucket holds 750 GB, we're not going to have more than 15 buckets), and then multiply the flush throughput by the number of buckets to get a minimum compaction throughput to maintain your sstable count. Full explanation: for a min compaction threshold of {{T}}, the bucket at level {{N}} can contain {{SsubN = T^N}} 'units' (unit == memtable's worth of data on disk). Every time a new unit is added, it has a {{1/SsubN}} chance of causing the bucket at level N to fill. If the bucket at level N fills, it causes {{SsubN}} units to be compacted. So, for each active level in your system you have {{SsubN * 1/SsubN}}, or {{1}} amortized unit to compact any time a new unit is added.
[jira] [Updated] (CASSANDRA-2191) Multithread across compaction buckets
[ https://issues.apache.org/jira/browse/CASSANDRA-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2191: Attachment: (was: 0001-Add-a-compacting-set-to-sstabletracker.txt)
[jira] [Updated] (CASSANDRA-2191) Multithread across compaction buckets
[ https://issues.apache.org/jira/browse/CASSANDRA-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2191: Attachment: 0003-Expose-multiple-compactions-via-JMX-and-deprecate-sing.txt 0002-Use-the-compacting-set-of-sstables-to-schedule-multith.txt 0001-Add-a-compacting-set-to-sstabletracker.txt Rebased for trunk.
[jira] [Updated] (CASSANDRA-2156) Compaction Throttling
[ https://issues.apache.org/jira/browse/CASSANDRA-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2156: Attachment: 0001-Throttle-total-compaction-to-a-configurable-throughput.txt Rebased for trunk: still applies atop CASSANDRA-2191. Compaction Throttling - Key: CASSANDRA-2156 URL: https://issues.apache.org/jira/browse/CASSANDRA-2156 Project: Cassandra Issue Type: New Feature Reporter: Stu Hood Fix For: 0.8 Attachments: 0001-Throttle-total-compaction-to-a-configurable-throughput.txt, for-0.6-0001-Throttle-compaction-to-a-fixed-throughput.txt, for-0.6-0002-Make-compaction-throttling-configurable.txt Compaction is currently relatively bursty: we compact as fast as we can, and then we wait for the next compaction to be possible (hurry up and wait). Instead, to properly amortize compaction, you'd like to compact exactly as fast as you need to in order to keep the sstable count under control. For every new level of compaction, you need to increase the rate that you compact at: a rule of thumb that we're testing on our clusters is to determine the maximum number of buckets a node can support (e.g., if the 15th bucket holds 750 GB, we're not going to have more than 15 buckets), and then multiply the flush throughput by the number of buckets to get a minimum compaction throughput to maintain your sstable count. Full explanation: for a min compaction threshold of {{T}}, the bucket at level {{N}} can contain {{SsubN = T^N}} 'units' (unit == memtable's worth of data on disk). Every time a new unit is added, it has a {{1/SsubN}} chance of causing the bucket at level N to fill. If the bucket at level N fills, it causes {{SsubN}} units to be compacted. So, for each active level in your system you have {{SsubN * 1 / SsubN}}, or {{1}} amortized unit to compact any time a new unit is added. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
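The amortization argument above can be made concrete with a small sketch (names are illustrative, not from the patch): with buckets holding {{T^N}} units, each freshly flushed unit implies about one amortized unit of compaction per active bucket level, so a node must sustain roughly flush throughput times the number of active buckets.

```java
// Rule-of-thumb arithmetic from the ticket; class and method names are
// invented for illustration.
public class CompactionMath
{
    // Units a bucket at a given level holds: S_N = T^N,
    // where T is the min compaction threshold.
    public static long unitsAtLevel(int minCompactionThreshold, int level)
    {
        long units = 1;
        for (int i = 0; i < level; i++)
            units *= minCompactionThreshold;
        return units;
    }

    // Minimum sustained compaction throughput needed to keep the sstable
    // count under control: flush throughput times active bucket count.
    public static double minCompactionThroughput(double flushThroughputMbs,
                                                 int activeBuckets)
    {
        return flushThroughputMbs * activeBuckets;
    }
}
```

For example, flushing at 10 MB/s with 15 active buckets implies a minimum sustained compaction throughput of about 150 MB/s.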
[jira] [Updated] (CASSANDRA-1263) Push replication factor down to the replication strategy
[ https://issues.apache.org/jira/browse/CASSANDRA-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Hermes updated CASSANDRA-1263: -- Attachment: 1263.txt Push replication factor down to the replication strategy Key: CASSANDRA-1263 URL: https://issues.apache.org/jira/browse/CASSANDRA-1263 Project: Cassandra Issue Type: Task Components: Core Reporter: Jeremy Hanna Assignee: Jon Hermes Priority: Minor Fix For: 0.8 Attachments: 1263-incomplete.txt, 1263.txt Original Estimate: 8h Remaining Estimate: 8h Currently the replication factor is in the keyspace metadata. As we've added the datacenter shard strategy, the replication factor becomes more computed by the replication strategy. It seems reasonable to therefore push the replication factor for the keyspace down to the replication strategy so that it can be handled in one place. This adds on the work being done in CASSANDRA-1066 since that ticket will make the replication strategy a member variable of keyspace metadata instead of just a quasi singleton giving the replication strategy state for each keyspace. That makes it able to have the replication factor. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-1263) Push replication factor down to the replication strategy
[ https://issues.apache.org/jira/browse/CASSANDRA-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009811#comment-13009811 ] Jon Hermes commented on CASSANDRA-1263: --- Oh, not quite done. CLI still has bad help messages and it's not running through the code path that validates options on KSMD creation. Push replication factor down to the replication strategy Key: CASSANDRA-1263 URL: https://issues.apache.org/jira/browse/CASSANDRA-1263 Project: Cassandra Issue Type: Task Components: Core Reporter: Jeremy Hanna Assignee: Jon Hermes Priority: Minor Fix For: 0.8 Attachments: 1263-incomplete.txt, 1263.txt Original Estimate: 8h Remaining Estimate: 8h Currently the replication factor is in the keyspace metadata. As we've added the datacenter shard strategy, the replication factor becomes more computed by the replication strategy. It seems reasonable to therefore push the replication factor for the keyspace down to the replication strategy so that it can be handled in one place. This adds on the work being done in CASSANDRA-1066 since that ticket will make the replication strategy a member variable of keyspace metadata instead of just a quasi singleton giving the replication strategy state for each keyspace. That makes it able to have the replication factor. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2364) Record dynamic snitch latencies for counter writes
[ https://issues.apache.org/jira/browse/CASSANDRA-2364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009817#comment-13009817 ] Stu Hood commented on CASSANDRA-2364: - What about tracking write latencies using a separate metric set? Alternatively, if the snitch had less than RF scores, it could choose a random node. Choosing unbalanced write coordinators for counters would only cause imbalances pre-replication: if repair is running frequently (repair-on-write, read-repair, AES, etc), the load is balanced according to the tokens, as usual. I agree that adding things that are really counter specific is not healthy, unless they also help to set us up for other atomic features in the future. Record dynamic snitch latencies for counter writes -- Key: CASSANDRA-2364 URL: https://issues.apache.org/jira/browse/CASSANDRA-2364 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Stu Hood Priority: Minor Labels: counters Fix For: 0.8 The counter code chooses a single replica to coordinate a write, meaning that it should be subject to dynamic snitch latencies like a read would be. This already works when there are reads going on, because the dynamic snitch read latencies are used to pick a node to coordinate, but when there are no reads going on (such as during a backfill) the latencies do not adjust. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
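The fallback Stu suggests ("if the snitch had less than RF scores, it could choose a random node") could look roughly like this. All names here are hypothetical; the real dynamic snitch keeps per-endpoint score state rather than taking a map argument.

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch: pick the replica with the best (lowest) latency score, falling
// back to a random replica when fewer than RF scores have been recorded.
public class WriteCoordinatorChooser
{
    private final Random random = new Random();

    public String choose(List<String> replicas, Map<String, Double> scores)
    {
        if (scores.size() < replicas.size())
            return replicas.get(random.nextInt(replicas.size()));
        String best = replicas.get(0);
        for (String replica : replicas)
            if (scores.getOrDefault(replica, Double.MAX_VALUE)
                    < scores.getOrDefault(best, Double.MAX_VALUE))
                best = replica;
        return best;
    }
}
```

The random fallback avoids the backfill problem described in the issue: with no reads to refresh scores, a fixed choice would pile every counter write onto one replica.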
[jira] [Created] (CASSANDRA-2366) Remove more uses of SSTableUtils.writeRaw
Remove more uses of SSTableUtils.writeRaw - Key: CASSANDRA-2366 URL: https://issues.apache.org/jira/browse/CASSANDRA-2366 Project: Cassandra Issue Type: Test Reporter: Stu Hood writeRaw uses the binary memtable write path, and is consequently 1. complex, 2. fragile. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (CASSANDRA-2366) Remove more uses of SSTableUtils.writeRaw
[ https://issues.apache.org/jira/browse/CASSANDRA-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood reassigned CASSANDRA-2366: --- Assignee: Stu Hood Remove more uses of SSTableUtils.writeRaw - Key: CASSANDRA-2366 URL: https://issues.apache.org/jira/browse/CASSANDRA-2366 Project: Cassandra Issue Type: Test Reporter: Stu Hood Assignee: Stu Hood writeRaw uses the binary memtable write path, and is consequently 1. complex, 2. fragile. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2366) Remove more uses of SSTableUtils.writeRaw
[ https://issues.apache.org/jira/browse/CASSANDRA-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stu Hood updated CASSANDRA-2366: Attachment: 0001-CASSANDRA-2366-Remove-some-uses-of-SSTableUtils.writeR.txt Replaces writeRaw uses in SSTableWriter*Test, but leaves a usage in SSTableTest. If/when we change the file format we can remove that final usage. Remove more uses of SSTableUtils.writeRaw - Key: CASSANDRA-2366 URL: https://issues.apache.org/jira/browse/CASSANDRA-2366 Project: Cassandra Issue Type: Test Reporter: Stu Hood Assignee: Stu Hood Fix For: 0.8 Attachments: 0001-CASSANDRA-2366-Remove-some-uses-of-SSTableUtils.writeR.txt writeRaw uses the binary memtable write path, and is consequently 1. complex, 2. fragile. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2187) Cassandra Cli hangs forever if schema does not settle within timeout window
[ https://issues.apache.org/jira/browse/CASSANDRA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthew F. Dennis updated CASSANDRA-2187: - Reviewer: jbellis Affects Version/s: 0.7.2 Fix Version/s: 0.7.3 Cassandra Cli hangs forever if schema does not settle within timeout window --- Key: CASSANDRA-2187 URL: https://issues.apache.org/jira/browse/CASSANDRA-2187 Project: Cassandra Issue Type: Bug Affects Versions: 0.7.2 Reporter: Chris Goffinet Assignee: Chris Goffinet Priority: Minor Fix For: 0.7.3 Attachments: 0001-Fix-Cassandra-cli-to-respect-timeout-if-schema-does-.patch validateSchemaIsSettled will hang in the while loop since we never update start if migrations never settle. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1084405 - /cassandra/trunk/test/unit/org/apache/cassandra/db/RowCacheTest.java
Author: jbellis Date: Tue Mar 22 23:06:44 2011 New Revision: 1084405 URL: http://svn.apache.org/viewvc?rev=1084405&view=rev Log: fix build Modified: cassandra/trunk/test/unit/org/apache/cassandra/db/RowCacheTest.java Modified: cassandra/trunk/test/unit/org/apache/cassandra/db/RowCacheTest.java URL: http://svn.apache.org/viewvc/cassandra/trunk/test/unit/org/apache/cassandra/db/RowCacheTest.java?rev=1084405&r1=1084404&r2=1084405&view=diff == --- cassandra/trunk/test/unit/org/apache/cassandra/db/RowCacheTest.java (original) +++ cassandra/trunk/test/unit/org/apache/cassandra/db/RowCacheTest.java Tue Mar 22 23:06:44 2011 @@ -19,6 +19,7 @@ package org.apache.cassandra.db; import java.util.Collection; +import java.util.Set; import org.junit.Test; @@ -133,7 +134,7 @@ public class RowCacheTest extends Cleanu assert store.getRowCacheSize() == 0; // load the cache from disk -store.rowCache.readSaved(); +store.initCaches(); assert store.getRowCacheSize() == 100; for (int i = 0; i < 100; i++)
[jira] [Commented] (CASSANDRA-2227) add cache loading to row/key cache tests
[ https://issues.apache.org/jira/browse/CASSANDRA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009896#comment-13009896 ] Jonathan Ellis commented on CASSANDRA-2227: --- committed, thanks! add cache loading to row/key cache tests Key: CASSANDRA-2227 URL: https://issues.apache.org/jira/browse/CASSANDRA-2227 Project: Cassandra Issue Type: Test Components: Core Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Priority: Minor Fix For: 0.7.5 Attachments: CASSANDRA-2227-v2.patch, CASSANDRA-2227-v3.patch, CASSANDRA-2227-v4-fix-rowCache-trunk.patch, CASSANDRA-2227.patch Original Estimate: 2h Remaining Estimate: 2h -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
buildbot success in ASF Buildbot on cassandra-trunk
The Buildbot has detected a restored build of cassandra-trunk on ASF Buildbot. Full details are available at: http://ci.apache.org/builders/cassandra-trunk/builds/1158 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: isis_ubuntu Build Reason: Build Source Stamp: [branch cassandra/trunk] 1084405 Blamelist: jbellis Build succeeded! sincerely, -The Buildbot
[jira] [Updated] (CASSANDRA-2344) generate CQL python driver artifacts for release
[ https://issues.apache.org/jira/browse/CASSANDRA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Evans updated CASSANDRA-2344: -- Reviewer: brandon.williams (was: stephenc) generate CQL python driver artifacts for release Key: CASSANDRA-2344 URL: https://issues.apache.org/jira/browse/CASSANDRA-2344 Project: Cassandra Issue Type: Improvement Components: Packaging Reporter: Eric Evans Assignee: Eric Evans Priority: Minor Labels: cql Fix For: 0.8 Attachments: v1-0001-CASSANDRA-2344-create-python-release-artifacts.txt Create release artifacts for the Python (and Twisted Python) CQL drivers. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2344) generate CQL python driver artifacts for release
[ https://issues.apache.org/jira/browse/CASSANDRA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009910#comment-13009910 ] Brandon Williams commented on CASSANDRA-2344: - +1 generate CQL python driver artifacts for release Key: CASSANDRA-2344 URL: https://issues.apache.org/jira/browse/CASSANDRA-2344 Project: Cassandra Issue Type: Improvement Components: Packaging Reporter: Eric Evans Assignee: Eric Evans Priority: Minor Labels: cql Fix For: 0.8 Attachments: v1-0001-CASSANDRA-2344-create-python-release-artifacts.txt Create release artifacts for the Python (and Twisted Python) CQL drivers. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1084426 - /cassandra/trunk/build.xml
Author: eevans Date: Wed Mar 23 00:34:32 2011 New Revision: 1084426 URL: http://svn.apache.org/viewvc?rev=1084426&view=rev Log: CASSANDRA-2344 create python release artifacts Patch by eevans for CASSANDRA-2344 Modified: cassandra/trunk/build.xml Modified: cassandra/trunk/build.xml URL: http://svn.apache.org/viewvc/cassandra/trunk/build.xml?rev=1084426&r1=1084425&r2=1084426&view=diff == --- cassandra/trunk/build.xml (original) +++ cassandra/trunk/build.xml Wed Mar 23 00:34:32 2011 @@ -444,7 +444,7 @@ </target> <!-- creates release tarballs --> -<target name="artifacts" depends="jar,javadoc" +<target name="artifacts" depends="jar,javadoc,py-cql-driver,tx-cql-driver" description="Create Cassandra release artifacts"> <mkdir dir="${dist.dir}"/> <copy todir="${dist.dir}/lib"> @@ -529,26 +529,18 @@ <target name="release" depends="artifacts,rat-init" description="Create and QC release artifacts"> -<checksum file="${build.dir}/${final.name}-bin.tar.gz" -forceOverwrite="yes" -todir="${build.dir}" -fileext=".md5" -algorithm="MD5" /> -<checksum file="${build.dir}/${final.name}-src.tar.gz" -forceOverwrite="yes" -todir="${build.dir}" -fileext=".md5" -algorithm="MD5" /> -<checksum file="${build.dir}/${final.name}-bin.tar.gz" -forceOverwrite="yes" -todir="${build.dir}" -fileext=".sha" -algorithm="SHA" /> -<checksum file="${build.dir}/${final.name}-src.tar.gz" -forceOverwrite="yes" -todir="${build.dir}" -fileext=".sha" -algorithm="SHA" /> +<checksum forceOverwrite="yes" todir="${build.dir}" fileext=".md5" +algorithm="MD5"> +<fileset dir="${build.dir}"> +<include name="*.tar.gz" /> +</fileset> +</checksum> +<checksum forceOverwrite="yes" todir="${build.dir}" fileext=".sha" +algorithm="SHA"> +<fileset dir="${build.dir}"> +<include name="*.tar.gz" /> +</fileset> +</checksum> <rat:report xmlns:rat="antlib:org.apache.rat.anttasks" reportFile="${build.dir}/${final.name}-bin.rat.txt" @@ -887,4 +879,25 @@ <delete dir="build/eclipse-classes" /> </target> +<target name="py-cql-driver" +description="Generate Python CQL driver artifact"> +<echo>Creating Python CQL driver artifact...</echo> +<exec executable="python" dir="${basedir}/drivers/py" failonerror="true"> +<arg line="setup.py" /> +<arg line="sdist" /> +<arg line="--dist-dir" /> +<arg line="${build.dir}" /> +</exec> +</target> + +<target name="tx-cql-driver" +description="Generate Twisted CQL driver artifact"> +<echo>Creating Twisted CQL driver artifact...</echo> +<exec executable="python" dir="${basedir}/drivers/txpy" failonerror="true"> +<arg line="setup.py" /> +<arg line="sdist" /> +<arg line="--dist-dir" /> +<arg line="${build.dir}" /> +</exec> +</target> </project>
svn commit: r1084437 - in /cassandra/trunk/doc/cql: CQL.html CQL.textile
Author: eevans Date: Wed Mar 23 01:09:46 2011 New Revision: 1084437 URL: http://svn.apache.org/viewvc?rev=1084437&view=rev Log: bump CQL doc version to 1.0 Patch by eevans Modified: cassandra/trunk/doc/cql/CQL.html cassandra/trunk/doc/cql/CQL.textile [diff: unrenderable HTML spill omitted. The change retitles the document from "Cassandra Query Language (CQL) v0.99.1" to "Cassandra Query Language (CQL) v1.0.0", updates the matching table-of-contents anchors, and adds "Versioning" and "Changes" entries to the table of contents.]
[jira] [Commented] (CASSANDRA-2080) Upgrade to release of Whirr 0.3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009947#comment-13009947 ] Hudson commented on CASSANDRA-2080: --- Integrated in Cassandra #797 (See [https://hudson.apache.org/hudson/job/Cassandra/797/]) Upgrade to release of Whirr 0.3.0 - Key: CASSANDRA-2080 URL: https://issues.apache.org/jira/browse/CASSANDRA-2080 Project: Cassandra Issue Type: Improvement Reporter: Stu Hood Assignee: Stu Hood Priority: Trivial Fix For: 0.7.5 Attachments: 0001-Fetch-whirr-from-Maven-central.txt Whirr 0.3.0 has been released. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2344) generate CQL python driver artifacts for release
[ https://issues.apache.org/jira/browse/CASSANDRA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009946#comment-13009946 ] Hudson commented on CASSANDRA-2344: --- Integrated in Cassandra #797 (See [https://hudson.apache.org/hudson/job/Cassandra/797/]) CASSANDRA-2344 create python release artifacts Patch by eevans for CASSANDRA-2344 generate CQL python driver artifacts for release Key: CASSANDRA-2344 URL: https://issues.apache.org/jira/browse/CASSANDRA-2344 Project: Cassandra Issue Type: Improvement Components: Packaging Reporter: Eric Evans Assignee: Eric Evans Priority: Minor Labels: cql Fix For: 0.8 Attachments: v1-0001-CASSANDRA-2344-create-python-release-artifacts.txt Create release artifacts for the Python (and Twisted Python) CQL drivers. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2351) Null CF comments should be allowed
[ https://issues.apache.org/jira/browse/CASSANDRA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009948#comment-13009948 ] Hudson commented on CASSANDRA-2351: --- Integrated in Cassandra #797 (See [https://hudson.apache.org/hudson/job/Cassandra/797/]) Null CF comments should be allowed -- Key: CASSANDRA-2351 URL: https://issues.apache.org/jira/browse/CASSANDRA-2351 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.8 Reporter: Gary Dusbabek Assignee: Jon Hermes Priority: Minor Fix For: 0.8 Attachments: 2351.txt, null_cf_comment_test_case.patch Prior to 1906, cassandra tolerated null CF comments. They were converted to empty quotes when the CFM was created. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2352) zero-length strings should result in zero-length ByteBuffers
[ https://issues.apache.org/jira/browse/CASSANDRA-2352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009949#comment-13009949 ] Hudson commented on CASSANDRA-2352: --- Integrated in Cassandra #797 (See [https://hudson.apache.org/hudson/job/Cassandra/797/]) zero-length strings should result in zero-length ByteBuffers Key: CASSANDRA-2352 URL: https://issues.apache.org/jira/browse/CASSANDRA-2352 Project: Cassandra Issue Type: Bug Components: API, Core Reporter: Eric Evans Assignee: Eric Evans Labels: cql Fix For: 0.8 Attachments: v1-0001-CASSANDRA-2352-AT.fromString-should-return-empty-BB-fo.txt The {{o.a.c.db.marshal.AbstractType.fromString()}} methods should return an empty {{ByteBuffer}} when passed a zero-length string, (empty bytes do {{validate()}} properly). -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
svn commit: r1084461 - in /cassandra/branches/cassandra-0.7: CHANGES.txt src/java/org/apache/cassandra/utils/ByteBufferUtil.java
Author: jbellis Date: Wed Mar 23 02:30:05 2011 New Revision: 1084461 URL: http://svn.apache.org/viewvc?rev=1084461view=rev Log: fix potential infinite loop in ByteBufferUtil.inputStream patch by jbellis; reviewed by slebresne for CASSANDRA-2365 Modified: cassandra/branches/cassandra-0.7/CHANGES.txt cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/ByteBufferUtil.java Modified: cassandra/branches/cassandra-0.7/CHANGES.txt URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/CHANGES.txt?rev=1084461r1=1084460r2=1084461view=diff == --- cassandra/branches/cassandra-0.7/CHANGES.txt (original) +++ cassandra/branches/cassandra-0.7/CHANGES.txt Wed Mar 23 02:30:05 2011 @@ -14,6 +14,7 @@ the same threshold for TTL expiration (CASSANDRA-2349) * fix race when iterating CFs during add/drop (CASSANDRA-2350) * add ConsistencyLevel command to CLI (CASSANDRA-2354) + * fix potential infinite loop in ByteBufferUtil.inputStream (CASSANDRA-2365) 0.7.4 Modified: cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/ByteBufferUtil.java URL: http://svn.apache.org/viewvc/cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/ByteBufferUtil.java?rev=1084461r1=1084460r2=1084461view=diff == --- cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/ByteBufferUtil.java (original) +++ cassandra/branches/cassandra-0.7/src/java/org/apache/cassandra/utils/ByteBufferUtil.java Wed Mar 23 02:30:05 2011 @@ -363,9 +363,11 @@ public class ByteBufferUtil @Override public int read(byte[] bytes, int off, int len) throws IOException { +if (!copy.hasRemaining()) +return -1; + len = Math.min(len, copy.remaining()); copy.get(bytes, off, len); - return len; }
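The contract the patch restores can be illustrated with a self-contained sketch (the class name here is invented; the real fix lives inside the anonymous InputStream returned by ByteBufferUtil.inputStream): read(byte[], int, int) must return -1 once the buffer is exhausted, because returning 0 makes any caller that loops until -1 spin forever.

```java
import java.io.InputStream;
import java.nio.ByteBuffer;

// Minimal ByteBuffer-backed InputStream honoring the java.io.InputStream
// read contract: -1 (not 0) signals end of stream.
public class ByteBufferInputStream extends InputStream
{
    private final ByteBuffer copy;

    public ByteBufferInputStream(ByteBuffer buffer)
    {
        // duplicate() shares the content but gets an independent position,
        // so reading does not disturb the caller's buffer.
        this.copy = buffer.duplicate();
    }

    @Override
    public int read()
    {
        return copy.hasRemaining() ? copy.get() & 0xFF : -1;
    }

    @Override
    public int read(byte[] bytes, int off, int len)
    {
        if (!copy.hasRemaining())
            return -1; // EOF: the check the bug was missing
        len = Math.min(len, copy.remaining());
        copy.get(bytes, off, len);
        return len;
    }
}
```

Without the hasRemaining() guard, an exhausted buffer yields Math.min(len, 0) == 0, so the method reports a zero-byte read indefinitely instead of end of stream.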
[Cassandra Wiki] Update of FAQ_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The FAQ_JP page has been changed by MakiWatanabe. The comment on this change is: Translate jmx_localhost_refused. http://wiki.apache.org/cassandra/FAQ_JP?action=diffrev1=68rev2=69 -- Anchor(jmx_localhost_refused) - == Nodetool says Connection refused to host: 127.0.1.1 for any remote host. What gives? == - Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up it's own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions. + == Nodetoolをどのホストに使用しても Connection refused to host: 127.0.1.1 と言われるのはなぜでしょうか? == + NodetoolはJMXに依存しています。JMXはRMIに依存しており、独自のリスナとコネクタをデータ交換の必要に応じて生成します。通常はこれらはバックグラウンドで透過的に動作しますが、接続元、あるいは接続先のホスト名の名前解決が正しく行われないと、混乱させられるような例外が発生する場合があります。 - If you are not using DNS, then make sure that your `/etc/hosts` files are accurate on both ends. If that fails try passing the `-Djava.rmi.server.hostname=$IP` option to the JVM at startup (where `$IP` is the address of the interface you can reach from the remote machine). + もしDNSを使用していないのであれば、接続元・接続先双方のホストが正しく`/etc/hosts`に登録されていることを確認してください。それでもうまくいかない場合は、JVMの起動時に`-Djava.rmi.server.hostname=$IP`を指定して下さい。ここで、$IPは他のホストからそのホストに接続できるIPアドレスです。 Anchor(iter_world)
[Cassandra Wiki] Update of FAQ_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The FAQ_JP page has been changed by MakiWatanabe. http://wiki.apache.org/cassandra/FAQ_JP?action=diffrev1=69rev2=70 -- Anchor(jmx_localhost_refused) - == Nodetool says Connection refused to host: 127.0.1.1 for any remote host. What gives? == - Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up it's own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions. - - If you are not using DNS, then make sure that your `/etc/hosts` files are accurate on both ends. If that fails try passing the `-Djava.rmi.server.hostname=$IP` option to the JVM at startup (where `$IP` is the address of the interface you can reach from the remote machine). - - Anchor(jmx_localhost_refused) - == Nodetoolをどのホストに使用しても Connection refused to host: 127.0.1.1 と言われるのはなぜでしょうか? == NodetoolはJMXに依存しています。JMXはRMIに依存しており、独自のリスナとコネクタをデータ交換の必要に応じて生成します。通常はこれらはバックグラウンドで透過的に動作しますが、接続元、あるいは接続先のホスト名の名前解決が正しく行われないと、混乱させられるような例外が発生する場合があります。
[Cassandra Wiki] Update of FAQ_JP by MakiWatanabe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The FAQ_JP page has been changed by MakiWatanabe. http://wiki.apache.org/cassandra/FAQ_JP?action=diffrev1=70rev2=71 -- Anchor(jmx_localhost_refused) - == Nodetoolをどのホストに使用しても Connection refused to host: 127.0.1.1 と言われるのはなぜでしょうか? == + == Nodetoolをどのリモートホストに実行してもConnection refused to host: 127.0.1.1と言われます.何が起きているのでしょうか? == NodetoolはJMXに依存しています。JMXはRMIに依存しており、独自のリスナとコネクタをデータ交換の必要に応じて生成します。通常はこれらはバックグラウンドで透過的に動作しますが、接続元、あるいは接続先のホスト名の名前解決が正しく行われないと、混乱させられるような例外が発生する場合があります。 もしDNSを使用していないのであれば、接続元・接続先双方のホストが正しく`/etc/hosts`に登録されていることを確認してください。それでもうまくいかない場合は、JVMの起動時に`-Djava.rmi.server.hostname=$IP`を指定して下さい。ここで、$IPは他のホストからそのホストに接続できるIPアドレスです。
[jira] [Commented] (CASSANDRA-2227) add cache loading to row/key cache tests
[ https://issues.apache.org/jira/browse/CASSANDRA-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009976#comment-13009976 ] Hudson commented on CASSANDRA-2227: --- Integrated in Cassandra-0.7 #399 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/399/]) add cache loading to row/key cache tests Key: CASSANDRA-2227 URL: https://issues.apache.org/jira/browse/CASSANDRA-2227 Project: Cassandra Issue Type: Test Components: Core Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Priority: Minor Fix For: 0.7.5 Attachments: CASSANDRA-2227-v2.patch, CASSANDRA-2227-v3.patch, CASSANDRA-2227-v4-fix-rowCache-trunk.patch, CASSANDRA-2227.patch Original Estimate: 2h Remaining Estimate: 2h -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2365) ByteBufferUtil.read(byte[]) returns 0 when the end of the stream has been reached.
[ https://issues.apache.org/jira/browse/CASSANDRA-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13009977#comment-13009977 ] Hudson commented on CASSANDRA-2365: --- Integrated in Cassandra-0.7 #399 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/399/]) fix potential infinite loop in ByteBufferUtil.inputStream patch by jbellis; reviewed by slebresne for CASSANDRA-2365 ByteBufferUtil.read(byte[]) returns 0 when the end of the stream has been reached. --- Key: CASSANDRA-2365 URL: https://issues.apache.org/jira/browse/CASSANDRA-2365 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.4 Reporter: Yewei Zhang Assignee: Jonathan Ellis Priority: Minor Fix For: 0.7.5 Attachments: 2365.txt read(byte[], int, int) doesn't return -1 when the end of the stream is reached. Instead, it returns 0. len = Math.min(len, copy.remaining()); copy.get(bytes, off, len); return len; copy.remaining() returns 0 when the end of the stream is reached. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira