elaborate on why cache changes matter from end-user perspective
Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87c068e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87c068e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87c068e4

Branch: refs/heads/trunk
Commit: 87c068e40bc33ddb356482ef8151738e97be8f84
Parents: 3a0cc8b
Author: Jonathan Ellis <[email protected]>
Authored: Fri Apr 20 11:15:19 2012 -0500
Committer: Jonathan Ellis <[email protected]>
Committed: Tue Apr 24 13:11:36 2012 -0500

----------------------------------------------------------------------
 NEWS.txt | 12 +++++++-----
 1 files changed, 7 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/87c068e4/NEWS.txt
----------------------------------------------------------------------
diff --git a/NEWS.txt b/NEWS.txt
index dc2c476..9032e13 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -80,12 +80,14 @@ Features
       the cluster. This is useful for cases such as testing different
       compaction strategies with live traffic without affecting the cluster.
     - Key and row caches are now global, similar to the global memtable
-      threshold.
-    - Off-heap caches no longer require JNA.
+      threshold. Manual tuning of cache sizes per-columnfamily is no longer
+      required.
+    - Off-heap caches no longer require JNA, and will work out of the box
+      on Windows as well as Unix platforms.
     - Streaming is now multithreaded.
     - Compactions may now be aborted via JMX or nodetool.
     - The stress tool is not new in 1.1, but it is newly included in
-      binary builds as well as the source tree
+      binary builds now, as well as the source tree.
     - Hadoop: a new BulkOutputFormat is included which will directly write
       SSTables locally and then stream them into the cluster. YOU
       SHOULD USE BulkOutputFormat BY DEFAULT. ColumnFamilyOutputFormat
@@ -94,10 +96,10 @@ Features
       more efficient.
     - Hadoop: KeyRange.filter is now supported with ColumnFamilyInputFormat,
       allowing index expressions to be evaluated server-side to reduce
-      the amount of data sent to Hadoop
+      the amount of data sent to Hadoop.
     - Hadoop: ColumnFamilyRecordReader has a wide-row mode, enabled via
       a boolean parameter to setInputColumnFamily, that pages through
-      data column-at-a-time instead of row-at-a-time
+      data column-at-a-time instead of row-at-a-time.
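
The cache change above is the one end users will see first: in 1.1 the key
and row caches are sized once, globally, in cassandra.yaml, rather than
per-columnfamily in the schema. A minimal sketch of the relevant settings,
assuming the 1.1 cassandra.yaml option names (values shown are illustrative,
not recommendations):

    # cassandra.yaml -- global cache sizing replaces per-CF tuning
    key_cache_size_in_mb:        # blank = automatic (min(5% of heap, 100MB))
    row_cache_size_in_mb: 0      # 0 disables the row cache entirely
    # off-heap provider; as of 1.1 it no longer depends on JNA
    row_cache_provider: SerializingCacheProvider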
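
For the BulkOutputFormat recommendation in the diff, a minimal Hadoop job
setup might look like the sketch below. The keyspace, columnfamily, address,
and partitioner values are placeholders; only the output-format wiring is
the point:

    import org.apache.cassandra.hadoop.BulkOutputFormat;
    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class BulkLoadJobSketch
    {
        public static void main(String[] args) throws Exception
        {
            Job job = new Job(new Configuration(), "bulk-load-sketch");
            job.setJarByClass(BulkLoadJobSketch.class);
            // Write SSTables locally, then stream them into the cluster,
            // instead of sending individual mutations over Thrift.
            job.setOutputFormatClass(BulkOutputFormat.class);
            ConfigHelper.setOutputColumnFamily(job.getConfiguration(),
                                               "Keyspace1", "Standard1");
            ConfigHelper.setOutputInitialAddress(job.getConfiguration(),
                                                 "127.0.0.1");
            ConfigHelper.setOutputPartitioner(job.getConfiguration(),
                    "org.apache.cassandra.dht.RandomPartitioner");
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Reducers emit a ByteBuffer row key and a List<Mutation>, the same contract
ColumnFamilyOutputFormat uses, which is what should make switching between
the two a matter of changing the output format class.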
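
The wide-row mode mentioned last is enabled on the input side. A sketch,
assuming the 1.1 ConfigHelper overload of setInputColumnFamily with a
trailing boolean (names and addresses again placeholders):

    import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.thrift.SliceRange;
    import org.apache.cassandra.utils.ByteBufferUtil;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class WideRowInputSketch
    {
        public static void main(String[] args) throws Exception
        {
            Job job = new Job(new Configuration(), "wide-row-sketch");
            job.setJarByClass(WideRowInputSketch.class);
            job.setInputFormatClass(ColumnFamilyInputFormat.class);
            // The trailing 'true' enables wide-row mode: the record reader
            // pages through each row column-at-a-time instead of
            // materializing whole rows at once.
            ConfigHelper.setInputColumnFamily(job.getConfiguration(),
                                              "Keyspace1", "Standard1", true);
            ConfigHelper.setInputInitialAddress(job.getConfiguration(),
                                                "127.0.0.1");
            ConfigHelper.setInputPartitioner(job.getConfiguration(),
                    "org.apache.cassandra.dht.RandomPartitioner");
            // An open-ended slice predicate, so paging covers the full row.
            SlicePredicate predicate = new SlicePredicate().setSlice_range(
                    new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER,
                                   ByteBufferUtil.EMPTY_BYTE_BUFFER,
                                   false, Integer.MAX_VALUE));
            ConfigHelper.setInputSlicePredicate(job.getConfiguration(),
                                                predicate);
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Per the NEWS entry, in this mode the reader hands data to mappers
column-at-a-time rather than row-at-a-time, so jobs over very wide rows no
longer need to fit an entire row in memory at once.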
