Merge branch 'cassandra-1.1' into cassandra-1.2

Conflicts:
        src/java/org/apache/cassandra/thrift/CassandraServer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18a1a4b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18a1a4b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18a1a4b9

Branch: refs/heads/trunk
Commit: 18a1a4b93e50d3b11ca570039dafa186f1624f41
Parents: 1891316 1cbbba0
Author: Yuki Morishita <yu...@apache.org>
Authored: Fri Jan 11 11:03:13 2013 -0600
Committer: Yuki Morishita <yu...@apache.org>
Committed: Fri Jan 11 11:03:13 2013 -0600

----------------------------------------------------------------------
 CHANGES.txt                                        |    1 +
 .../apache/cassandra/thrift/CassandraServer.java   |    4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18a1a4b9/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index b3d5dd7,82f503c..b34a97c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,197 -1,37 +1,198 @@@
 -1.1.9
 +1.2.1
 + * re-allow wrapping ranges for start_token/end_token range pairing (CASSANDRA-5106)
 + * fix validation compaction of empty rows (CASSANDRA-5136)
 + * nodetool methods to enable/disable hint storage/delivery (CASSANDRA-4750)
 + * disallow bloom filter false positive chance of 0 (CASSANDRA-5013)
 + * add threadpool size adjustment methods to JMXEnabledThreadPoolExecutor and 
 +   CompactionManagerMBean (CASSANDRA-5044)
 + * fix hinting for dropped local writes (CASSANDRA-4753)
 + * off-heap cache doesn't need mutable column container (CASSANDRA-5057)
 + * apply disk_failure_policy to bad disks on initial directory creation 
 +   (CASSANDRA-4847)
 + * Optimize name-based queries to use ArrayBackedSortedColumns (CASSANDRA-5043)
 + * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
 + * pool [Compressed]RandomAccessReader objects on the partitioned read path
 +   (CASSANDRA-4942)
 + * Add debug logging to list filenames processed by Directories.migrateFile 
 +   method (CASSANDRA-4939)
 + * Expose black-listed directories via JMX (CASSANDRA-4848)
 + * Log compaction merge counts (CASSANDRA-4894)
 + * Minimize byte array allocation by AbstractData{Input,Output} (CASSANDRA-5090)
 + * Add SSL support for the binary protocol (CASSANDRA-5031)
 + * Allow non-schema system ks modification for shuffle to work (CASSANDRA-5097)
 + * cqlsh: Add default limit to SELECT statements (CASSANDRA-4972)
 + * cqlsh: fix DESCRIBE for 1.1 cfs in CQL3 (CASSANDRA-5101)
 + * Correctly gossip with nodes >= 1.1.7 (CASSANDRA-5102)
 + * Ensure CL guarantees on digest mismatch (CASSANDRA-5113)
 + * Validate correctly selects on composite partition key (CASSANDRA-5122)
 + * Fix exception when adding collection (CASSANDRA-5117)
 + * Handle states for non-vnode clusters correctly (CASSANDRA-5127)
 + * Refuse unrecognized replication strategy options (CASSANDRA-4795)
 + * Pick the correct value validator in sstable2json for cql3 tables (CASSANDRA-5134)
 + * Validate login for describe_keyspace, describe_keyspaces and set_keyspace
 +   (CASSANDRA-5144)
 + * Fix inserting empty maps (CASSANDRA-5141)
 +Merged from 1.1:
   * Simplify CompressedRandomAccessReader to work around JDK FD bug (CASSANDRA-5088)
   * Improve handling a changing target throttle rate mid-compaction (CASSANDRA-5087)
 - * fix multithreaded compaction deadlock (CASSANDRA-4492)
 - * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 - * Don't expire columns sooner than they should in 2ndary indexes (CASSANDRA-5079)
   * Pig: correctly decode row keys in widerow mode (CASSANDRA-5098)
   * nodetool repair command now prints progress (CASSANDRA-4767)
 + * Ensure Jackson dependency matches lib (CASSANDRA-5126)
   * fix user defined compaction to run against 1.1 data directory (CASSANDRA-5118)
   * Fix CQL3 BATCH authorization caching (CASSANDRA-5145)
+  * fix get_count returns incorrect value with TTL (CASSANDRA-5099)
  
  
 -1.1.8
 - * reset getRangeSlice filter after finishing a row for get_paged_slice
 -   (CASSANDRA-4919)
 +1.2.0
 + * Disallow counters in collections (CASSANDRA-5082)
 + * cqlsh: add unit tests (CASSANDRA-3920)
 + * fix default bloom_filter_fp_chance for LeveledCompactionStrategy (CASSANDRA-5093)
 +Merged from 1.1:
 + * add validation for get_range_slices with start_key and end_token (CASSANDRA-5089)
 +
 +
 +1.2.0-rc2
 + * fix nodetool ownership display with vnodes (CASSANDRA-5065)
 + * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
 + * Fix potential infinite loop when reloading CFS (CASSANDRA-5064)
 + * Fix SimpleAuthorizer example (CASSANDRA-5072)
 + * cqlsh: force CL.ONE for tracing and system.schema* queries (CASSANDRA-5070)
 + * Include cassandra-shuffle in the debian package (CASSANDRA-5058)
 +Merged from 1.1:
 + * fix multithreaded compaction deadlock (CASSANDRA-4492)
   * fix temporarily missing schema after upgrade from pre-1.1.5 (CASSANDRA-5061)
 + * Fix ALTER TABLE overriding compression options with defaults
 +   (CASSANDRA-4996, 5066)
 + * fix specifying and altering crc_check_chance (CASSANDRA-5053)
 + * fix Murmur3Partitioner ownership% calculation (CASSANDRA-5076)
 + * Don't expire columns sooner than they should in 2ndary indexes (CASSANDRA-5079)
 +
 +
 +1.2-rc1
 + * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
 + * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)
 + * Fix preparing insert queries (CASSANDRA-5016)
 + * Fix preparing queries with counter increment (CASSANDRA-5022)
 + * Fix preparing updates with collections (CASSANDRA-5017)
 + * Don't generate UUID based on other node address (CASSANDRA-5002)
 + * Fix message when trying to alter a clustering key type (CASSANDRA-5012)
 + * Update IAuthenticator to match the new IAuthorizer (CASSANDRA-5003)
 + * Fix inserting only a key in CQL3 (CASSANDRA-5040)
 + * Fix CQL3 token() function when used with strings (CASSANDRA-5050)
 +Merged from 1.1:
   * reduce log spam from invalid counter shards (CASSANDRA-5026)
   * Improve schema propagation performance (CASSANDRA-5025)
 - * Fall back to old describe_splits if d_s_ex is not available (CASSANDRA-4803)
 - * Improve error reporting when streaming ranges fail (CASSANDRA-5009)
 + * Fix for IndexHelper.IndexFor throws OOB Exception (CASSANDRA-5030)
 + * cqlsh: make it possible to describe thrift CFs (CASSANDRA-4827)
   * cqlsh: fix timestamp formatting on some platforms (CASSANDRA-5046)
 - * Fix ALTER TABLE overriding compression options with defaults (CASSANDRA-4996, 5066)
 - * Avoid error opening data file on startup (CASSANDRA-4984)
 - * Fix wrong index_options in cli 'show schema' (CASSANDRA-5008)
 - * Allow overriding number of available processor (CASSANDRA-4790)
  
  
 -1.1.7
 - * cqlsh: improve COPY FROM performance (CASSANDRA-4921)
 +1.2-beta3
 + * make consistency level configurable in cqlsh (CASSANDRA-4829)
 + * fix cqlsh rendering of blob fields (CASSANDRA-4970)
 + * fix cqlsh DESCRIBE command (CASSANDRA-4913)
 + * save truncation position in system table (CASSANDRA-4906)
 + * Move CompressionMetadata off-heap (CASSANDRA-4937)
 + * allow CLI to GET cql3 columnfamily data (CASSANDRA-4924)
 + * Fix rare race condition in getExpireTimeForEndpoint (CASSANDRA-4402)
 + * acquire references to overlapping sstables during compaction so bloom filter
 +   doesn't get free'd prematurely (CASSANDRA-4934)
 + * Don't share slice query filter in CQL3 SelectStatement (CASSANDRA-4928)
 + * Separate tracing from Log4J (CASSANDRA-4861)
 + * Exclude gcable tombstones from merkle-tree computation (CASSANDRA-4905)
 + * Better printing of AbstractBounds for tracing (CASSANDRA-4931)
 + * Optimize mostRecentTombstone check in CC.collectAllData (CASSANDRA-4883)
 + * Change stream session ID to UUID to avoid collision from same node (CASSANDRA-4813)
 + * Use Stats.db when bulk loading if present (CASSANDRA-4957)
 + * Skip repair on system_trace and keyspaces with RF=1 (CASSANDRA-4956)
 + * (cql3) Remove arbitrary SELECT limit (CASSANDRA-4918)
 + * Correctly handle prepared operation on collections (CASSANDRA-4945)
 + * Fix CQL3 LIMIT (CASSANDRA-4877)
 + * Fix Stress for CQL3 (CASSANDRA-4979)
 + * Remove cassandra specific exceptions from JMX interface (CASSANDRA-4893)
 + * (CQL3) Force using ALLOW FILTERING on potentially inefficient queries (CASSANDRA-4915)
 + * (cql3) Fix adding column when the table has collections (CASSANDRA-4982)
 + * (cql3) Fix allowing collections with compact storage (CASSANDRA-4990)
 + * (cql3) Refuse ttl/writetime function on collections (CASSANDRA-4992)
 + * Replace IAuthority with new IAuthorizer (CASSANDRA-4874)
 + * cqlsh: fix KEY pseudocolumn escaping when describing Thrift tables
 +   in CQL3 mode (CASSANDRA-4955)
   * add basic authentication support for Pig CassandraStorage (CASSANDRA-3042)
   * fix CQL2 ALTER TABLE compaction_strategy_class altering (CASSANDRA-4965)
 +Merged from 1.1:
 + * Fall back to old describe_splits if d_s_ex is not available (CASSANDRA-4803)
 + * Improve error reporting when streaming ranges fail (CASSANDRA-5009)
 + * Fix cqlsh timestamp formatting of timezone info (CASSANDRA-4746)
 + * Fix assertion failure with leveled compaction (CASSANDRA-4799)
 + * Check for null end_token in get_range_slice (CASSANDRA-4804)
 + * Remove all remnants of removed nodes (CASSANDRA-4840)
 + * Add auto-reloading of the log4j file in debian package (CASSANDRA-4855)
 + * Fix estimated row cache entry size (CASSANDRA-4860)
 + * reset getRangeSlice filter after finishing a row for get_paged_slice
 +   (CASSANDRA-4919)
   * expunge row cache post-truncate (CASSANDRA-4940)
 - * remove IAuthority2 (CASSANDRA-4875)
 + * Allow static CF definition with compact storage (CASSANDRA-4910)
 + * Fix endless loop/compaction of schema_* CFs due to broken timestamps (CASSANDRA-4880)
 + * Fix 'wrong class type' assertion in CounterColumn (CASSANDRA-4976)
 +
 +
 +1.2-beta2
 + * fp rate of 1.0 disables BF entirely; LCS defaults to 1.0 (CASSANDRA-4876)
 + * off-heap bloom filters for row keys (CASSANDRA-4865)
 + * add extension point for sstable components (CASSANDRA-4049)
 + * improve tracing output (CASSANDRA-4852, 4862)
 + * make TRACE verb droppable (CASSANDRA-4672)
 + * fix BulkLoader recognition of CQL3 columnfamilies (CASSANDRA-4755)
 + * Sort commitlog segments for replay by id instead of mtime (CASSANDRA-4793)
 + * Make hint delivery asynchronous (CASSANDRA-4761)
 + * Pluggable Thrift transport factories for CLI and cqlsh (CASSANDRA-4609, 4610)
 + * cassandra-cli: allow Double value type to be inserted to a column (CASSANDRA-4661)
 + * Add ability to use custom TServerFactory implementations (CASSANDRA-4608)
 + * optimize batchlog flushing to skip successful batches (CASSANDRA-4667)
 + * include metadata for system keyspace itself in schema tables (CASSANDRA-4416)
 + * add check to PropertyFileSnitch to verify presence of location for
 +   local node (CASSANDRA-4728)
 + * add PBSPredictor consistency modeler (CASSANDRA-4261)
 + * remove vestiges of Thrift unframed mode (CASSANDRA-4729)
 + * optimize single-row PK lookups (CASSANDRA-4710)
 + * adjust blockFor calculation to account for pending ranges due to node 
 +   movement (CASSANDRA-833)
 + * Change CQL version to 3.0.0 and stop accepting 3.0.0-beta1 (CASSANDRA-4649)
 + * (CQL3) Make prepared statement global instead of per connection 
 +   (CASSANDRA-4449)
 + * Fix scrubbing of CQL3 created tables (CASSANDRA-4685)
 + * (CQL3) Fix validation when using counter and regular columns in the same 
 +   table (CASSANDRA-4706)
 + * Fix bug starting Cassandra with simple authentication (CASSANDRA-4648)
 + * Add support for batchlog in CQL3 (CASSANDRA-4545, 4738)
 + * Add support for multiple column family outputs in CFOF (CASSANDRA-4208)
 + * Support repairing only the local DC nodes (CASSANDRA-4747)
 + * Use rpc_address for binary protocol and change default port (CASSANDRA-4751)
 + * Fix use of collections in prepared statements (CASSANDRA-4739)
 + * Store more information into peers table (CASSANDRA-4351, 4814)
 + * Configurable bucket size for size tiered compaction (CASSANDRA-4704)
 + * Run leveled compaction in parallel (CASSANDRA-4310)
 + * Fix potential NPE during CFS reload (CASSANDRA-4786)
 + * Composite indexes may miss results (CASSANDRA-4796)
 + * Move consistency level to the protocol level (CASSANDRA-4734, 4824)
 + * Fix Subcolumn slice ends not respected (CASSANDRA-4826)
 + * Fix Assertion error in cql3 select (CASSANDRA-4783)
 + * Fix list prepend logic (CQL3) (CASSANDRA-4835)
 + * Add booleans as literals in CQL3 (CASSANDRA-4776)
 + * Allow renaming PK columns in CQL3 (CASSANDRA-4822)
 + * Fix binary protocol NEW_NODE event (CASSANDRA-4679)
 + * Fix potential infinite loop in tombstone compaction (CASSANDRA-4781)
 + * Remove system tables accounting from schema (CASSANDRA-4850)
 + * (cql3) Force provided columns in clustering key order in 
 +   'CLUSTERING ORDER BY' (CASSANDRA-4881)
 + * Fix composite index bug (CASSANDRA-4884)
 + * Fix short read protection for CQL3 (CASSANDRA-4882)
 + * Add tracing support to the binary protocol (CASSANDRA-4699)
 + * (cql3) Don't allow prepared marker inside collections (CASSANDRA-4890)
 + * Re-allow order by on non-selected columns (CASSANDRA-4645)
 + * Bug when composite index is created in a table having collections (CASSANDRA-4909)
 + * log index scan subject in CompositesSearcher (CASSANDRA-4904)
 +Merged from 1.1:
   * add get[Row|Key]CacheEntries to CacheServiceMBean (CASSANDRA-4859)
   * fix get_paged_slice to wrap to next row correctly (CASSANDRA-4816)
   * fix indexing empty column values (CASSANDRA-4832)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18a1a4b9/src/java/org/apache/cassandra/thrift/CassandraServer.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/thrift/CassandraServer.java
index 7baeb17,4e61e9a..7e5faf9
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@@ -463,83 -416,48 +463,83 @@@ public class CassandraServer implement
          }
          else
          {
 -            pageSize = COUNT_PAGE_SIZE;
 +            logger.debug("get_count");
          }
  
 -        int totalCount = 0;
 -        List<ColumnOrSuperColumn> columns;
 -
 -        if (predicate.slice_range == null)
 +        try
          {
 -            predicate.slice_range = new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER,
 -                                                   ByteBufferUtil.EMPTY_BYTE_BUFFER,
 -                                                   false,
 -                                                   Integer.MAX_VALUE);
 -        }
 +            ThriftClientState cState = state();
 +            String keyspace = cState.getKeyspace();
 +            cState.hasColumnFamilyAccess(keyspace, column_parent.column_family, Permission.SELECT);
 +            Table table = Table.open(keyspace);
 +            ColumnFamilyStore cfs = table.getColumnFamilyStore(column_parent.column_family);
  
 -        final int requestedCount = predicate.slice_range.count;
 -        int remaining = requestedCount;
 -        int pages = 0;
 -        while (true)
 -        {
 -            predicate.slice_range.count = Math.min(pageSize, Math.max(2, remaining)); // fetch at least two columns
 -            columns = get_slice(key, column_parent, predicate, consistency_level);
 -            if (columns.isEmpty())
 -                break;
 +            if (predicate.column_names != null)
 +                return get_slice(key, column_parent, predicate, consistency_level).size();
  
 -            ByteBuffer firstName = getName(columns.get(0));
 -            int newColumns = pages == 0 || !firstName.equals(predicate.slice_range.start) ? columns.size() : columns.size() - 1;
 -            totalCount += newColumns;
 -            // if we over-counted, just return original limit
 -            if (totalCount > requestedCount)
 -                return requestedCount;
 -            remaining -= newColumns;
 -            pages++;
 -            // We're done if either:
 -            //   - We've queried the number of columns requested by the user
 -            //   - last fetched page only contains the column we already fetched
 -            if (remaining == 0 || ((columns.size() == 1) && (firstName.equals(predicate.slice_range.start))))
 -                break;
 +            int pageSize;
 +            // request by page if this is a large row
 +            if (cfs.getMeanColumns() > 0)
 +            {
 +                int averageColumnSize = (int) (cfs.getMeanRowSize() / cfs.getMeanColumns());
 +                pageSize = Math.min(COUNT_PAGE_SIZE, DatabaseDescriptor.getInMemoryCompactionLimit() / averageColumnSize);
 +                pageSize = Math.max(2, pageSize);
 +                logger.debug("average row column size is {}; using pageSize of {}", averageColumnSize, pageSize);
 +            }
              else
 -                predicate.slice_range.start = getName(columns.get(columns.size() - 1));
 -        }
 +            {
 +                pageSize = COUNT_PAGE_SIZE;
 +            }
 +
 +            int totalCount = 0;
 +            List<ColumnOrSuperColumn> columns;
  
 -        return totalCount;
 +            if (predicate.slice_range == null)
 +            {
 +                predicate.slice_range = new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER,
 +                                                       ByteBufferUtil.EMPTY_BYTE_BUFFER,
 +                                                       false,
 +                                                       Integer.MAX_VALUE);
 +            }
 +
 +            final int requestedCount = predicate.slice_range.count;
 +            int remaining = requestedCount;
 +            int pages = 0;
 +            while (true)
 +            {
 +                predicate.slice_range.count = Math.min(pageSize, Math.max(2, remaining)); // fetch at least two columns
 +                columns = get_slice(key, column_parent, predicate, consistency_level);
 +                if (columns.isEmpty())
 +                    break;
 +
 +                ByteBuffer firstName = getName(columns.get(0));
 +                int newColumns = pages == 0 || !firstName.equals(predicate.slice_range.start) ? columns.size() : columns.size() - 1;
 +
 +                totalCount += newColumns;
 +                // if we over-counted, just return original limit
 +                if (totalCount > requestedCount)
 +                    return requestedCount;
 +                remaining -= newColumns;
 +                pages++;
 +                // We're done if either:
 +                // - We've queried the number of columns requested by the user
-                 // - The last page wasn't full
-                 if (remaining == 0 || columns.size() < predicate.slice_range.count)
++                // - last fetched page only contains the column we already fetched
++                if (remaining == 0 || ((columns.size() == 1) && (firstName.equals(predicate.slice_range.start))))
 +                    break;
 +                else
 +                    predicate.slice_range.start = getName(columns.get(columns.size() - 1));
 +            }
 +
 +            return totalCount;
 +        }
 +        catch (RequestValidationException e)
 +        {
 +            throw ThriftConversion.toThrift(e);
 +        }
 +        finally
 +        {
 +            Tracing.instance().stopSession();
 +        }
      }
  
      private static ByteBuffer getName(ColumnOrSuperColumn cosc)
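For context, the paging loop that this merge keeps on the 1.2 side can be sketched as a standalone simulation. This is a hedged illustration, not the server code: `PagedCount`, `countColumns`, and `slice` are hypothetical names, `slice` stands in for the real `get_slice` Thrift call, and plain string names play the role of column names. It shows the two details the merged hunk preserves: re-fetching from the last seen column (so each later page overlaps by one), and stopping when either the requested count is reached or a page contains only the already-counted column.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical standalone sketch of the paged counting loop in get_count.
public class PagedCount
{
    // Stands in for get_slice: returns up to 'count' names >= 'start' ("" = unbounded).
    static List<String> slice(List<String> row, String start, int count)
    {
        List<String> out = new ArrayList<>();
        for (String name : row)
        {
            if (!start.isEmpty() && name.compareTo(start) < 0)
                continue;
            out.add(name);
            if (out.size() == count)
                break;
        }
        return out;
    }

    // Counts columns page by page, re-fetching from the last seen name and
    // subtracting the overlapping first column of every page after the first.
    static int countColumns(List<String> row, int requestedCount, int pageSize)
    {
        String start = "";
        int totalCount = 0;
        int remaining = requestedCount;
        int pages = 0;
        while (true)
        {
            int fetch = Math.min(pageSize, Math.max(2, remaining)); // fetch at least two columns
            List<String> columns = slice(row, start, fetch);
            if (columns.isEmpty())
                break;
            String firstName = columns.get(0);
            // the first column of every page after the first repeats the previous page's last column
            int newColumns = pages == 0 || !firstName.equals(start) ? columns.size() : columns.size() - 1;
            totalCount += newColumns;
            // if we over-counted, just return the original limit
            if (totalCount > requestedCount)
                return requestedCount;
            remaining -= newColumns;
            pages++;
            // done when the user's limit is reached, or the last page held only
            // the column we had already counted
            if (remaining == 0 || (columns.size() == 1 && firstName.equals(start)))
                break;
            start = columns.get(columns.size() - 1);
        }
        return totalCount;
    }

    public static void main(String[] args)
    {
        List<String> row = new ArrayList<>();
        for (int i = 0; i < 25; i++)
            row.add(String.format("col%03d", i));
        System.out.println(countColumns(row, Integer.MAX_VALUE, 10)); // 25
        System.out.println(countColumns(row, 7, 10));                 // 7
    }
}
```

Note how the old termination test (`columns.size() < predicate.slice_range.count`) is replaced by the single-column check, which is what the 1.2 side of the conflict resolution keeps.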
