Will Berkeley has uploaded this change for review. ( 
http://gerrit.cloudera.org:8080/12489


Change subject: Enable --rowset_metadata_store_keys by default
......................................................................

Enable --rowset_metadata_store_keys by default

While starting up a cluster where each tablet server had ~4000 replicas,
I noticed many replicas spent seconds opening. I traced this to having
to read the rowset bounds for each rowset from disk. It appears to be
the only I/O that occurs when opening the tablet replica. I think we
should try turning this flag on (at the beginning of the 1.10 release
cycle) to see if it would be a good idea to have it on by default. We
can always revert this change if there are problems.

Expected benefit:
1. Significantly decreased startup time when there are a lot of
   replicas. See c127477b52 for some numbers.

Expected cost:
1. Bigger tablet metadata files, and therefore bigger flushes of tablet
   metadata. Note that tablet metadata size is already (in part)
   proportional to the number of rowsets, so storing the bounds changes
   only the constant factor, not how the metadata grows.

Also:
Good: This is backwards compatible: old servers will ignore the new
      field and read the bounds from disk.
Good: This is forwards compatible: new servers will read the bounds
      from disk if the new field is missing.
Bad:  This won't affect rowsets that are already on disk, so read-only
      tablets written in previous versions will never see any benefit
      from this patch.
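The forwards-compatible behavior described above amounts to a simple
fallback on open. A minimal sketch of that logic, with hypothetical
names (RowSetMetadata, LoadBounds, ReadBoundsFromDisk are illustrative
only, not Kudu's actual API):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// Hypothetical, simplified stand-in for the rowset metadata. With
// --rowset_metadata_store_keys enabled, the min/max encoded keys are
// persisted in the metadata; old-format rowsets lack the field.
struct RowSetMetadata {
  std::optional<std::string> min_key;
  std::optional<std::string> max_key;
};

// Stand-in for the per-rowset disk read that slows down replica open.
std::pair<std::string, std::string> ReadBoundsFromDisk() {
  return {"disk_min", "disk_max"};
}

// Forwards-compatible load: use the stored keys when present, fall back
// to a disk read when the field is missing (old-format rowsets).
std::pair<std::string, std::string> LoadBounds(const RowSetMetadata& meta) {
  if (meta.min_key && meta.max_key) {
    return {*meta.min_key, *meta.max_key};  // no I/O needed
  }
  return ReadBoundsFromDisk();
}
```

This fallback is also why existing on-disk rowsets never benefit: their
metadata has no stored keys, so they always take the disk-read path
until they are rewritten by a flush or compaction.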

Change-Id: I2c4ac4b58845666cd10102fb125fc787c637e473
---
M src/kudu/tablet/diskrowset.cc
1 file changed, 1 insertion(+), 2 deletions(-)



  git pull ssh://gerrit.cloudera.org:29418/kudu refs/changes/89/12489/1
--
To view, visit http://gerrit.cloudera.org:8080/12489
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: newchange
Gerrit-Change-Id: I2c4ac4b58845666cd10102fb125fc787c637e473
Gerrit-Change-Number: 12489
Gerrit-PatchSet: 1
Gerrit-Owner: Will Berkeley <[email protected]>