hbase git commit: Update Misty's timezone

2017-09-29 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master 4136ab338 -> b0e1a1509


Update Misty's timezone

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/b0e1a150
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/b0e1a150
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/b0e1a150

Branch: refs/heads/master
Commit: b0e1a150928a5eb5dca2f933dc55ac206977f3ca
Parents: 4136ab3
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Fri Sep 29 10:37:45 2017 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Fri Sep 29 10:37:45 2017 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/b0e1a150/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 13be3ab..272e4d4 100755
--- a/pom.xml
+++ b/pom.xml
@@ -431,7 +431,7 @@
   <id>misty</id>
   <name>Misty Stanley-Jones</name>
   <email>mi...@apache.org</email>
-  <timezone>+10</timezone>
+  <timezone>-8</timezone>
 </developer>
 <developer>
   <id>ndimiduk</id>



hbase git commit: HBASE-18635 Fixed Asciidoc warning

2017-08-25 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master 2e8739623 -> 368591dfc


HBASE-18635 Fixed Asciidoc warning

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/368591df
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/368591df
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/368591df

Branch: refs/heads/master
Commit: 368591dfcd1fb4ec2be480b0970466386b059b82
Parents: 2e87396
Author: Jan Hentschel <jan.hentsc...@ultratendency.com>
Authored: Sun Aug 20 23:47:11 2017 +0200
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Fri Aug 25 13:09:57 2017 -0700

--
 src/main/asciidoc/_chapters/external_apis.adoc | 15 +++
 src/main/asciidoc/_chapters/schema_design.adoc | 12 ++--
 2 files changed, 13 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/368591df/src/main/asciidoc/_chapters/external_apis.adoc
--
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc 
b/src/main/asciidoc/_chapters/external_apis.adoc
index 2f85461..c0e4a5f 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -288,18 +288,17 @@ your filter to the file. For example, to return only rows 
for
 which keys start with u123 and use a batch size
 of 100, the filter file would look like this:
 
-++++
-<pre>
-<Scanner batch="100">
-  <filter>
+[source,xml]
+----
+<Scanner batch="100">
+  <filter>
 {
   "type": "PrefixFilter",
   "value": "u123"
 }
-  </filter>
-</Scanner>
-</pre>
-++++
+  </filter>
+</Scanner>
+----
 
 Pass the file to the `-d` argument of the `curl` request.
 |curl -vi -X PUT \
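The `curl` example above is cut off in the archive. As a supplementary, hedged sketch (the helper name and use of Python's standard library are illustrative assumptions, not part of the commit), the XML scanner body with its embedded JSON `PrefixFilter` can be assembled programmatically before passing it to `curl -d @filter.xml`:

```python
# Sketch: build the Scanner XML body shown above, with the JSON
# PrefixFilter embedded as the <filter> element's text. The element and
# filter names come from the example; the helper itself is illustrative.
import json
import xml.etree.ElementTree as ET

def build_scanner_body(prefix: str, batch: int = 100) -> str:
    scanner = ET.Element("Scanner", batch=str(batch))
    filt = ET.SubElement(scanner, "filter")
    filt.text = json.dumps({"type": "PrefixFilter", "value": prefix})
    return ET.tostring(scanner, encoding="unicode")

# Write the result to filter.xml and pass it to curl with `-d @filter.xml`.
print(build_scanner_body("u123"))
```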

http://git-wip-us.apache.org/repos/asf/hbase/blob/368591df/src/main/asciidoc/_chapters/schema_design.adoc
--
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc 
b/src/main/asciidoc/_chapters/schema_design.adoc
index cef05f2..d17f06b 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -1113,7 +1113,7 @@ If you don't have time to build it both ways and compare, my advice would be to
 [[schema.ops]]
 == Operational and Performance Configuration Options
 
-  Tune HBase Server RPC Handling
+===  Tune HBase Server RPC Handling
 
 * Set `hbase.regionserver.handler.count` (in `hbase-site.xml`) to cores x spindles for concurrency.
 * Optionally, split the call queues into separate read and write queues for differentiated service. The parameter `hbase.ipc.server.callqueue.handler.factor` specifies the number of call queues:
@@ -1129,7 +1129,7 @@ If you don't have time to build it both ways and compare, my advice would be to
 - `< 0.5` for more short-read
 - `> 0.5` for more long-read
 
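To make the interaction of these knobs concrete, here is a small illustrative calculation (the floor-based rounding and minimum of one queue are assumptions for the sketch; HBase's exact rounding may differ):

```python
# Illustrative sketch of how handler count, callqueue.handler.factor, and
# callqueue.read.ratio determine the queue layout. Rounding is assumed.
import math

def call_queue_layout(handler_count: int, handler_factor: float,
                      read_ratio: float) -> tuple:
    """Return (total_queues, read_queues, write_queues)."""
    total = max(1, math.floor(handler_count * handler_factor))
    reads = max(1, math.floor(total * read_ratio))
    return total, reads, total - reads

# 30 handlers, factor 0.1 -> 3 call queues; read ratio 0.5 -> 1 read, 2 write.
print(call_queue_layout(30, 0.1, 0.5))  # (3, 1, 2)
```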
-  Disable Nagle for RPC
+===  Disable Nagle for RPC
 
 Disable Nagle’s algorithm. Delayed ACKs can add up to ~200ms to RPC round trip time. Set the following parameters:
 
@@ -1140,7 +1140,7 @@ Disable Nagle’s algorithm. Delayed ACKs can add up to ~200ms to RPC round trip
 - `hbase.ipc.client.tcpnodelay = true`
 - `hbase.ipc.server.tcpnodelay = true`
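In `hbase-site.xml`, each of the settings above is expressed as a standard property block; a sketch of the two Nagle-related entries (values as recommended in the text):

```xml
<!-- Sketch: hbase-site.xml entries for the tcpnodelay settings above. -->
<property>
  <name>hbase.ipc.client.tcpnodelay</name>
  <value>true</value>
</property>
<property>
  <name>hbase.ipc.server.tcpnodelay</name>
  <value>true</value>
</property>
```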
 
-  Limit Server Failure Impact
+===  Limit Server Failure Impact
 
 Detect regionserver failure as fast as reasonable. Set the following parameters:
 
@@ -1149,7 +1149,7 @@ Detect regionserver failure as fast as reasonable. Set the following parameters:
 - `dfs.namenode.avoid.read.stale.datanode = true`
 - `dfs.namenode.avoid.write.stale.datanode = true`
 
-  Optimize on the Server Side for Low Latency
+===  Optimize on the Server Side for Low Latency
 
 * Skip the network for local blocks. In `hbase-site.xml`, set the following parameters:
 - `dfs.client.read.shortcircuit = true`
@@ -1187,7 +1187,7 @@ Detect regionserver failure as fast as reasonable. Set the following parameters:
 
 ==  Special Cases
 
-  For applications where failing quickly is better than waiting
+===  For applications where failing quickly is better than waiting
 
 *  In `hbase-site.xml` on the client side, set the following parameters:
 - Set `hbase.client.pause = 1000`
@@ -1196,7 +1196,7 @@ Detect regionserver failure as fast as reasonable. Set the following parameters:
 - Set the RecoverableZookeeper retry count: `zookeeper.recovery.retry = 1` (no retry)
 * In `hbase-site.xml` on the server side, set the Zookeeper session timeout for detecting server failures: `zookeeper.session.timeout` <= 30 seconds (20-30 is good).
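To see why a low `hbase.client.pause` and a low retry count make a client fail fast, here is a rough back-of-the-envelope sketch. Both the retry-count parameter name (`hbase.client.retries.number`) and the backoff multiplier table are assumptions based on HBase's documented retry behavior, so check them against your version; the arithmetic itself is purely illustrative:

```python
# Hedged sketch: estimate worst-case client blocking time from
# hbase.client.pause and hbase.client.retries.number. The multiplier
# table below is assumed, not taken from this commit.
RETRY_BACKOFF = [1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200]

def worst_case_wait_ms(pause_ms: int, retries: int) -> int:
    total = 0
    for attempt in range(retries):
        mult = RETRY_BACKOFF[min(attempt, len(RETRY_BACKOFF) - 1)]
        total += pause_ms * mult
    return total

# pause = 1000 ms with 3 retries -> 1000 + 2000 + 3000 = 6000 ms of waiting
print(worst_case_wait_ms(1000, 3))  # 6000
```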
 
-  For applications that can tolerate slightly out of date information
+===  For applications that can tolerate slightly out of date information
 
 **HBase tim

hbase git commit: HBASE-18548 Move sources of website gen and check jobs into source control

2017-08-10 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master ded0842ca -> 6114824b5


HBASE-18548 Move sources of website gen and check jobs into source control


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6114824b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6114824b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6114824b

Branch: refs/heads/master
Commit: 6114824b53c1299d2d79572800fb1e1cbc96cb66
Parents: ded0842
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Wed Aug 9 14:34:46 2017 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Thu Aug 10 14:48:14 2017 -0700

--
 CHANGES.txt |   0
 LICENSE.txt |   0
 NOTICE.txt  |   0
 README.txt  |   0
 .../jenkins-scripts/check-website-links.sh  |  47 ++
 .../jenkins-scripts/generate-hbase-website.sh   | 152 +++
 pom.xml |   0
 .../appendix_contributing_to_documentation.adoc |  34 ++---
 8 files changed, 216 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6114824b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/hbase/blob/6114824b/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/hbase/blob/6114824b/NOTICE.txt
--
diff --git a/NOTICE.txt b/NOTICE.txt
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/hbase/blob/6114824b/README.txt
--
diff --git a/README.txt b/README.txt
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/hbase/blob/6114824b/dev-support/jenkins-scripts/check-website-links.sh
--
diff --git a/dev-support/jenkins-scripts/check-website-links.sh 
b/dev-support/jenkins-scripts/check-website-links.sh
new file mode 100755
index 000..c23abbb
--- /dev/null
+++ b/dev-support/jenkins-scripts/check-website-links.sh
@@ -0,0 +1,47 @@
+#!/bin/bash
+
+# This script is designed to run as a Jenkins job, such as at
+# https://builds.apache.org/view/All/job/HBase%20Website%20Link%20Checker/
+#
+# It generates artifacts which the Jenkins job then can mail out and/or archive.
+#
+# We download a specific version of linklint because the original has bugs and
+# is not well maintained.
+#
+# See http://www.linklint.org/doc/inputs.html for linklint options
+
+# Clean up the workspace
+rm -rf *.zip > /dev/null
+rm -rf linklint > /dev/null
+rm -Rf link_report
+
+# This version of linklint fixes some bugs in the now-unmaintained 2.3.5 version
+wget http://ingo-karkat.de/downloads/tools/linklint/linklint-2.3.5_ingo_020.zip
+unzip linklint-2.3.5_ingo_020.zip
+chmod +x linklint/linklint.pl
+
+# Run the checker
+echo "Checking http://hbase.apache.org and saving report to link_report/"
+echo "Excluding /testapidocs/ because some tests use private classes not published in /apidocs/."
+# Check internal structure
+linklint/linklint.pl -http \
+ -host hbase.apache.org \
+ /@ \
+ -skip /testapidocs/@ \
+ -skip /testdevapidocs/@ \
+ -net \
+ -redirect \
+ -no_query_string \
+ -htmlonly \
+ -timeout 30 \
+ -delay 1 \
+ -limit 10 \
+ -doc link_report
+
+# Detect whether we had errors and act accordingly
+if grep -q 'ERROR' link_report/index.html; then
+  echo "Errors found. Sending email."
+  exit 1
+else
+  echo "No errors found. Warnings might be present."
+fi
\ No newline at end of file
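The script's error check relies on `grep -q`, which is silent and reports purely through its exit status: 0 when the pattern is found, 1 when it is not. A standalone illustration (file contents hypothetical):

```shell
# grep -q signals via exit status only: 0 = found, 1 = not found.
report=$(mktemp)
printf 'no problems here\n' > "$report"
if grep -q 'ERROR' "$report"; then
  echo "errors found"
else
  echo "clean"
fi
rm -f "$report"
```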

http://git-wip-us.apache.org/repos/asf/hbase/blob/6114824b/dev-support/jenkins-scripts/generate-hbase-website.sh
--
diff --git a/dev-support/jenkins-scripts/generate-hbase-website.sh 
b/dev-support/jenkins-scripts/generate-hbase-website.sh
new file mode 100644
index 000..a3f7823
--- /dev/null
+++ b/dev-support/jenkins-scripts/generate-hbase-website.sh
@@ -0,0 +1,152 @@
+#!/bin/bash
+
+# This script is meant to run as part of a Jenkins job such as
+# https://builds.apache.org/job/hbase_generate_website/
+#
+

hbase git commit: HBASE-18430 fixed typo

2017-07-21 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master 31c3edaa2 -> 70a357dc5


HBASE-18430 fixed typo

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/70a357dc
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/70a357dc
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/70a357dc

Branch: refs/heads/master
Commit: 70a357dc5cc74ae6a354c907959f644f563aeee4
Parents: 31c3eda
Author: coral <co...@cloudera.com>
Authored: Fri Jul 21 14:51:00 2017 -0500
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Fri Jul 21 13:46:38 2017 -0700

--
 .../asciidoc/_chapters/appendix_contributing_to_documentation.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/70a357dc/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
--
diff --git 
a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc 
b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
index 0d68dce..0337182 100644
--- a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
+++ b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
@@ -55,7 +55,7 @@ see <<developer,developer>>.
 If you spot an error in a string in a UI, utility, script, log message, or elsewhere,
 or you think something could be made more clear, or you think text needs to be added
 where it doesn't currently exist, the first step is to file a JIRA. Be sure to set
-the component to `Documentation` in addition any other involved components. Most
+the component to `Documentation` in addition to any other involved components. Most
 components have one or more default owners, who monitor new issues which come into
 those queues. Regardless of whether you feel able to fix the bug, you should still
 file bugs where you see them.



hbase git commit: HBASE-18332 Upgrade asciidoctor-maven-plugin

2017-07-17 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master 2d5a0fbd1 -> c423dc795


HBASE-18332 Upgrade asciidoctor-maven-plugin

Update asciidoctor-maven-plugin to 1.5.5 and asciidoctorj-pdf to 1.5.0-alpha.15
asciidoctor's pdfmark generation is turned off
Modify title-logo tag to title-logo-image

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c423dc79
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c423dc79
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c423dc79

Branch: refs/heads/master
Commit: c423dc7950c4746220498b0e0b8884c51c51e77e
Parents: 2d5a0fb
Author: Peter Somogyi <psomo...@cloudera.com>
Authored: Fri Jul 7 13:54:41 2017 +0200
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Mon Jul 17 19:05:53 2017 -0700

--
 pom.xml | 5 ++---
 src/main/asciidoc/book.adoc | 2 +-
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c423dc79/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 329c468..9554d85 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1265,7 +1265,6 @@
 
   
 
-
   
 
 
@@ -1391,8 +1390,8 @@
 2.12.0
 
 0.12
-1.5.2.1
-1.5.0-alpha.6
+1.5.5
+1.5.0-alpha.15
 3.0.0
 1.4
 6.18

http://git-wip-us.apache.org/repos/asf/hbase/blob/c423dc79/src/main/asciidoc/book.adoc
--
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index e5898d5..2b9bf26 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -26,7 +26,7 @@
 :Version: {docVersion}
 :revnumber: {docVersion}
 // Logo for PDF -- doesn't render in HTML
-:title-logo: hbase_logo_with_orca.png
+:title-logo-image: image:hbase_logo_with_orca.png[pdfwidth=4.25in,align=center]
 :numbered:
 :toc: left
 :toclevels: 1



hbase git commit: HBASE-18332 Upgrade asciidoctor-maven-plugin

2017-07-17 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/branch-2 4663d7b9a -> 80959b452


HBASE-18332 Upgrade asciidoctor-maven-plugin

Update asciidoctor-maven-plugin to 1.5.5 and asciidoctorj-pdf to 1.5.0-alpha.15
asciidoctor's pdfmark generation is turned off
Modify title-logo tag to title-logo-image


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/80959b45
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/80959b45
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/80959b45

Branch: refs/heads/branch-2
Commit: 80959b45286f3f8163e463358aeaf8342c002228
Parents: 4663d7b
Author: Peter Somogyi <psomo...@cloudera.com>
Authored: Fri Jul 7 13:54:41 2017 +0200
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Mon Jul 17 19:06:30 2017 -0700

--
 pom.xml | 5 ++---
 src/main/asciidoc/book.adoc | 2 +-
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/80959b45/pom.xml
--
diff --git a/pom.xml b/pom.xml
index f3e17f7..a03386f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1253,7 +1253,6 @@
 
   
 
-
   
 
 
@@ -1379,8 +1378,8 @@
 2.12.0
 
 0.12
-1.5.2.1
-1.5.0-alpha.6
+1.5.5
+1.5.0-alpha.15
 3.0.0
 1.4
 6.18

http://git-wip-us.apache.org/repos/asf/hbase/blob/80959b45/src/main/asciidoc/book.adoc
--
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index e5898d5..2b9bf26 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -26,7 +26,7 @@
 :Version: {docVersion}
 :revnumber: {docVersion}
 // Logo for PDF -- doesn't render in HTML
-:title-logo: hbase_logo_with_orca.png
+:title-logo-image: image:hbase_logo_with_orca.png[pdfwidth=4.25in,align=center]
 :numbered:
 :toc: left
 :toclevels: 1



hbase git commit: HBASE-12794 Guidelines for filing JIRA issues

2017-06-26 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/branch-2 289938337 -> 44c9c1de9


HBASE-12794 Guidelines for filing JIRA issues

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/44c9c1de
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/44c9c1de
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/44c9c1de

Branch: refs/heads/branch-2
Commit: 44c9c1de9941d72ab1801eed93288aa65fd1828d
Parents: 2899383
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Fri Jun 23 15:05:42 2017 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Mon Jun 26 08:32:31 2017 -0700

--
 src/main/asciidoc/_chapters/developer.adoc | 87 +++--
 1 file changed, 82 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/44c9c1de/src/main/asciidoc/_chapters/developer.adoc
--
diff --git a/src/main/asciidoc/_chapters/developer.adoc 
b/src/main/asciidoc/_chapters/developer.adoc
index 50b9c74..46746a1 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -67,13 +67,90 @@ FreeNode offers a web-based client, but most people prefer a native client, and
 Check for existing issues in link:https://issues.apache.org/jira/browse/HBASE[Jira].
 If it's either a new feature request, enhancement, or a bug, file a ticket.
 
+We track multiple types of work in JIRA:
+
+- Bug: Something is broken in HBase itself.
+- Test: A test is needed, or a test is broken.
+- New feature: You have an idea for new functionality. It's often best to bring
+  these up on the mailing lists first, and then write up a design specification
+  that you add to the feature request JIRA.
+- Improvement: A feature exists, but could be tweaked or augmented. It's often
+  best to bring these up on the mailing lists first and have a discussion, then
+  summarize or link to the discussion if others seem interested in the
+  improvement.
+- Wish: This is like a new feature, but for something you may not have the
+  background to flesh out yourself.
+
+Bugs and tests have the highest priority and should be actionable.
+
+ Guidelines for reporting effective issues
+
+- *Search for duplicates*: Your issue may have already been reported. Have a
+  look, realizing that someone else might have worded the summary differently.
++
+Also search the mailing lists, which may have information about your problem
+and how to work around it. Don't file an issue for something that has already
+been discussed and resolved on a mailing list, unless you strongly disagree
+with the resolution *and* are willing to help take the issue forward.
+
+* *Discuss in public*: Use the mailing lists to discuss what you've discovered
+  and see if there is something you've missed. Avoid using back channels, so
+  that you benefit from the experience and expertise of the project as a whole.
+
+* *Don't file on behalf of others*: You might not have all the context, and you
+  don't have as much motivation to see it through as the person who is actually
+  experiencing the bug. It's more helpful in the long term to encourage others
+  to file their own issues. Point them to this material and offer to help out
+  the first time or two.
+
+* *Write a good summary*: A good summary includes information about the problem,
+  the impact on the user or developer, and the area of the code.
+** Good: `Address new license dependencies from hadoop3-alpha4`
+** Room for improvement: `Canary is broken`
++
+If you write a bad title, someone else will rewrite it for you. This is time
+they could have spent working on the issue instead.
+
+* *Give context in the description*: It can be good to think of this in multiple
+  parts:
+** What happens or doesn't happen?
+** How does it impact you?
+** How can someone else reproduce it?
+** What would "fixed" look like?
++
+You don't need to know the answers for all of these, but give as much
+information as you can. If you can provide technical information, such as a
+Git commit SHA that you think might have caused the issue or a build failure
+on builds.apache.org where you think the issue first showed up, share that
+info.
+
+* *Fill in all relevant fields*: These fields help us filter, categorize, and
+  find things.
+
+* *One bug, one issue, one patch*: To help with back-porting, don't split issues
+  or fixes among multiple bugs.
+
+* *Add value if you can*: Filing issues is great, even if you don't know how to
+  fix them. But providing as much information as possible, being willing to
+  triage and answer questions, and being willing to test potential fixes is even
+  better! We want to f

hbase git commit: HBASE-12794 Guidelines for filing JIRA issues

2017-06-26 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master 2d781aa15 -> ed70f15b1


HBASE-12794 Guidelines for filing JIRA issues

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ed70f15b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ed70f15b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ed70f15b

Branch: refs/heads/master
Commit: ed70f15b1e869fb3f643feed04338491332058ba
Parents: 2d781aa
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Fri Jun 23 15:05:42 2017 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Mon Jun 26 08:27:32 2017 -0700

--
 src/main/asciidoc/_chapters/developer.adoc | 87 +++--
 1 file changed, 82 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ed70f15b/src/main/asciidoc/_chapters/developer.adoc
--
diff --git a/src/main/asciidoc/_chapters/developer.adoc 
b/src/main/asciidoc/_chapters/developer.adoc
index 50b9c74..46746a1 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -67,13 +67,90 @@ FreeNode offers a web-based client, but most people prefer a native client, and
 Check for existing issues in link:https://issues.apache.org/jira/browse/HBASE[Jira].
 If it's either a new feature request, enhancement, or a bug, file a ticket.
 
+We track multiple types of work in JIRA:
+
+- Bug: Something is broken in HBase itself.
+- Test: A test is needed, or a test is broken.
+- New feature: You have an idea for new functionality. It's often best to bring
+  these up on the mailing lists first, and then write up a design specification
+  that you add to the feature request JIRA.
+- Improvement: A feature exists, but could be tweaked or augmented. It's often
+  best to bring these up on the mailing lists first and have a discussion, then
+  summarize or link to the discussion if others seem interested in the
+  improvement.
+- Wish: This is like a new feature, but for something you may not have the
+  background to flesh out yourself.
+
+Bugs and tests have the highest priority and should be actionable.
+
+ Guidelines for reporting effective issues
+
+- *Search for duplicates*: Your issue may have already been reported. Have a
+  look, realizing that someone else might have worded the summary differently.
++
+Also search the mailing lists, which may have information about your problem
+and how to work around it. Don't file an issue for something that has already
+been discussed and resolved on a mailing list, unless you strongly disagree
+with the resolution *and* are willing to help take the issue forward.
+
+* *Discuss in public*: Use the mailing lists to discuss what you've discovered
+  and see if there is something you've missed. Avoid using back channels, so
+  that you benefit from the experience and expertise of the project as a whole.
+
+* *Don't file on behalf of others*: You might not have all the context, and you
+  don't have as much motivation to see it through as the person who is actually
+  experiencing the bug. It's more helpful in the long term to encourage others
+  to file their own issues. Point them to this material and offer to help out
+  the first time or two.
+
+* *Write a good summary*: A good summary includes information about the problem,
+  the impact on the user or developer, and the area of the code.
+** Good: `Address new license dependencies from hadoop3-alpha4`
+** Room for improvement: `Canary is broken`
++
+If you write a bad title, someone else will rewrite it for you. This is time
+they could have spent working on the issue instead.
+
+* *Give context in the description*: It can be good to think of this in multiple
+  parts:
+** What happens or doesn't happen?
+** How does it impact you?
+** How can someone else reproduce it?
+** What would "fixed" look like?
++
+You don't need to know the answers for all of these, but give as much
+information as you can. If you can provide technical information, such as a
+Git commit SHA that you think might have caused the issue or a build failure
+on builds.apache.org where you think the issue first showed up, share that
+info.
+
+* *Fill in all relevant fields*: These fields help us filter, categorize, and
+  find things.
+
+* *One bug, one issue, one patch*: To help with back-porting, don't split issues
+  or fixes among multiple bugs.
+
+* *Add value if you can*: Filing issues is great, even if you don't know how to
+  fix them. But providing as much information as possible, being willing to
+  triage and answer questions, and being willing to test potential fixes is even
+  better! We want to fix yo

hbase git commit: Add diversity statement to CoC

2017-03-31 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/master 7700a7fac -> 1c4d9c896


Add diversity statement to CoC

Signed-off-by: Misty Stanley-Jones <mi...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1c4d9c89
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1c4d9c89
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1c4d9c89

Branch: refs/heads/master
Commit: 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7
Parents: 7700a7f
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Fri Mar 31 12:49:02 2017 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Fri Mar 31 12:49:37 2017 -0700

--
 src/main/site/xdoc/coc.xml | 46 +
 1 file changed, 42 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1c4d9c89/src/main/site/xdoc/coc.xml
--
diff --git a/src/main/site/xdoc/coc.xml b/src/main/site/xdoc/coc.xml
index e091507..fc2b549 100644
--- a/src/main/site/xdoc/coc.xml
+++ b/src/main/site/xdoc/coc.xml
@@ -32,13 +32,25 @@ under the License.
   
   
 
-We expect participants in discussions on the HBase project mailing lists, IRC channels, and JIRA issues to abide by the Apache Software Foundation's <a href="http://apache.org/foundation/policies/conduct.html">Code of Conduct</a>.
+We expect participants in discussions on the HBase project mailing lists, IRC
+channels, and JIRA issues to abide by the Apache Software Foundation's
+<a href="http://apache.org/foundation/policies/conduct.html">Code of
+Conduct</a>.
 
 
-If you feel there had been a violation of this code, please point out your concerns publicly in a friendly and matter of fact manner. Nonverbal communication is prone to misinterpretation and misunderstanding. Everyone has bad days and sometimes says things they regret later. Someone else's communication style may clash with yours, but the difference can be amicably resolved. After pointing out your concerns please be generous upon receiving an apology.
+If you feel there has been a violation of this code, please point out your
+concerns publicly in a friendly and matter of fact manner. Nonverbal
+communication is prone to misinterpretation and misunderstanding. Everyone has
+bad days and sometimes says things they regret later. Someone else's
+communication style may clash with yours, but the difference can be amicably
+resolved. After pointing out your concerns please be generous upon receiving an
+apology.
 
 
-Should there be repeated instances of code of conduct violations, or if there is an obvious and severe violation, the HBase PMC may become involved. When this happens the PMC will openly discuss the matter, most likely on the dev@hbase mailing list, and will consider taking the following actions, in order, if there is a continuing problem with an individual:
+Should there be repeated instances of code of conduct violations, or if there is
+an obvious and severe violation, the HBase PMC may become involved. When this
+happens the PMC will openly discuss the matter, most likely on the dev@hbase
+mailing list, and will consider taking the following actions, in order, if there
+is a continuing problem with an individual:
 
 A friendly off-list warning;
A friendly public warning, if the communication at issue was on list, otherwise another off-list warning;
@@ -47,7 +59,33 @@ Should there be repeated instances of code of conduct violations, or if there is
 
 
 
-For flagrant violations requiring a firm response the PMC may opt to skip early steps. No action will be taken before public discussion leading to consensus or a successful majority vote.
+For flagrant violations requiring a firm response the PMC may opt to skip early
+steps. No action will be taken before public discussion leading to consensus or
+a successful majority vote.
+
+  
+  
+
+As a project and a community, we encourage you to participate in the HBase project
+in whatever capacity suits you, whether it involves development, documentation,
+answering questions on mailing lists, triaging issue and patch review, managing
+releases, or any other way that you want to help. We appreciate your
+contributions and the time you dedicate to the HBase project. We strive to
+recognize the work of participants publicly. Please let us know if we can
+improve in this area.
+
+
+We value diversity and strive to support participation by people with all
+different backgrounds. Rich projects grow from groups with different points of
+view and different backgrounds. We welcome your suggestions about how we can
+welcome participation by people at all skill levels and with all aspects of the
+project.
+
+
+If you can think of something we are doing that

[41/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html 
b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
index 5868fbe..34f4e86 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
@@ -399,31 +399,31 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 TableName
-AsyncTableBase.getName()
+Table.getName()
 Gets the fully qualified table name instance of this 
table.
 
 
 
 TableName
-RegionLocator.getName()
+AsyncTableBase.getName()
 Gets the fully qualified table name instance of this 
table.
 
 
 
 TableName
-BufferedMutator.getName()
-Gets the fully qualified table name instance of the table 
that this BufferedMutator writes to.
+AsyncTableRegionLocator.getName()
+Gets the fully qualified table name instance of the table 
whose region we want to locate.
 
 
 
 TableName
-AsyncTableRegionLocator.getName()
-Gets the fully qualified table name instance of the table 
whose region we want to locate.
+BufferedMutator.getName()
+Gets the fully qualified table name instance of the table 
that this BufferedMutator writes to.
 
 
 
 TableName
-Table.getName()
+RegionLocator.getName()
 Gets the fully qualified table name instance of this 
table.
 
 
@@ -1475,7 +1475,7 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 http://docs.oracle.com/javase/8/docs/api/java/util/SortedSet.html?is-external=true;
 title="class or interface in java.util">SortedSetTableName
 RSGroupInfo.getTables()
-Set of tables that are members of this group
+Get set of tables that are members of the group.
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html 
b/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
index 9178688..b106144 100644
--- a/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
+++ b/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
@@ -111,13 +111,13 @@ var activeTableTab = "activeTableTab";
 
 @InterfaceAudience.Public
  @InterfaceStability.Evolving
-public class ConnectionFactory
+public class ConnectionFactory
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
-A non-instantiable class that manages creation of Connections.
- Managing the lifecycle of the Connections to the cluster is 
the responsibility of
- the caller.
- From a Connection, Table 
implementations are retrieved
- with Connection.getTable(TableName).
 Example:
+A non-instantiable class that manages creation of Connections. Managing the 
lifecycle of
+ the Connections to the cluster is 
the responsibility of the caller. From a
+ Connection, Table 
implementations are retrieved with
+ Connection.getTable(org.apache.hadoop.hbase.TableName).
 Example:
+
  
  Connection connection = ConnectionFactory.createConnection(config);
  Table table = connection.getTable(TableName.valueOf("table1"));
@@ -196,20 +196,20 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 Method and Description
 
 
-static AsyncConnection
+static http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureAsyncConnection
 createAsyncConnection()
 Call createAsyncConnection(Configuration)
 using default HBaseConfiguration.
 
 
 
-static AsyncConnection
+static http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureAsyncConnection
 createAsyncConnection(org.apache.hadoop.conf.Configurationconf)
 Call createAsyncConnection(Configuration,
 User) using the given conf and a
  User object created by UserProvider.
 
 
 
-static AsyncConnection
+static http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFutureAsyncConnection
 createAsyncConnection(org.apache.hadoop.conf.Configurationconf,
  Useruser)
 Create a new AsyncConnection instance using the passed 
conf and user.
@@ -277,7 +277,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 HBASE_CLIENT_ASYNC_CONNECTION_IMPL
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String HBASE_CLIENT_ASYNC_CONNECTION_IMPL
+public static 

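The `ConnectionFactory` diff above changes the `createAsyncConnection` overloads from returning an `AsyncConnection` directly to returning a `CompletableFuture<AsyncConnection>`. As a hedged illustration of what that buys callers — this is a toy sketch with a made-up `FakeConnection` type, not the HBase API — a future-returning factory lets the caller chain follow-up work and block only at the edge of the program:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncFactorySketch {
    // Stand-in for an asynchronously established connection (hypothetical type).
    static final class FakeConnection {
        final String clusterId;
        FakeConnection(String clusterId) { this.clusterId = clusterId; }
    }

    // The factory returns a future instead of a ready connection, so callers
    // can compose continuations without blocking the calling thread.
    static CompletableFuture<FakeConnection> createAsyncConnection(String clusterId) {
        return CompletableFuture.supplyAsync(() -> new FakeConnection(clusterId));
    }

    static String describe(String clusterId) {
        return createAsyncConnection(clusterId)
                .thenApply(conn -> "connected to " + conn.clusterId)
                .join(); // block only here, at the very end
    }

    public static void main(String[] args) {
        System.out.println(describe("test-cluster")); // connected to test-cluster
    }
}
```

The same composition works with `thenCompose` for chained async calls; `join()` is only appropriate in a `main` or test harness, not inside request-handling code.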
[46/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/CellUtil.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/CellUtil.html 
b/apidocs/org/apache/hadoop/hbase/CellUtil.html
index 22c98a3..1c43554 100644
--- a/apidocs/org/apache/hadoop/hbase/CellUtil.html
+++ b/apidocs/org/apache/hadoop/hbase/CellUtil.html
@@ -1278,7 +1278,7 @@ public statichttp://docs.oracle.com/javase/8/docs/api/java/nio/By
 
 
 createCellScanner
-public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">List? extends 
org.apache.hadoop.hbase.CellScannablecellScannerables)
+public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">List? extends 
org.apache.hadoop.hbase.CellScannablecellScannerables)
 
 Parameters:
 cellScannerables - 
@@ -1293,7 +1293,7 @@ public statichttp://docs.oracle.com/javase/8/docs/api/java/nio/By
 
 
 createCellScanner
-public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html?is-external=true;
 title="class or interface in java.lang">IterableCellcellIterable)
+public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html?is-external=true;
 title="class or interface in java.lang">IterableCellcellIterable)
 
 Parameters:
 cellIterable - 
@@ -1308,7 +1308,7 @@ public statichttp://docs.oracle.com/javase/8/docs/api/java/nio/By
 
 
 createCellScanner
-public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorCellcells)
+public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorCellcells)
 
 Parameters:
 cells - 
@@ -1324,7 +1324,7 @@ public statichttp://docs.oracle.com/javase/8/docs/api/java/nio/By
 
 
 createCellScanner
-public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(Cell[]cellArray)
+public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(Cell[]cellArray)
 
 Parameters:
 cellArray - 
@@ -1339,7 +1339,7 @@ public statichttp://docs.oracle.com/javase/8/docs/api/java/nio/By
 
 
 createCellScanner
-public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+public staticorg.apache.hadoop.hbase.CellScannercreateCellScanner(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 Flatten the map of cells out under the CellScanner
 
 Parameters:
@@ -1357,7 +1357,7 @@ public statichttp://docs.oracle.com/javase/8/docs/api/java/nio/By
 
 matchingRow
 http://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true;
 title="class or interface in java.lang">@Deprecated
-public staticbooleanmatchingRow(Cellleft,
+public staticbooleanmatchingRow(Cellleft,
   Cellright)
 Deprecated.As of release 2.0.0, this will be removed in HBase 
3.0.0.
  Instead use matchingRows(Cell,
 Cell)
@@ -1376,7 +1376,7 @@ public staticboolean
 
 matchingRow
-public staticbooleanmatchingRow(Cellleft,
+public staticbooleanmatchingRow(Cellleft,
   byte[]buf)
 
 
@@ -1386,7 +1386,7 @@ public staticboolean
 
 matchingRow
-public staticbooleanmatchingRow(Cellleft,
+public staticbooleanmatchingRow(Cellleft,
   byte[]buf,
   intoffset,
   intlength)
@@ -1398,7 +1398,7 @@ public staticboolean
 
 matchingFamily
-public staticbooleanmatchingFamily(Cellleft,
+public staticbooleanmatchingFamily(Cellleft,
  Cellright)
 
 
@@ -1408,7 +1408,7 @@ public staticboolean
 
 matchingFamily
-public staticbooleanmatchingFamily(Cellleft,
+public staticbooleanmatchingFamily(Cellleft,
  byte[]buf)
 
 
@@ -1418,7 +1418,7 @@ public staticboolean
 
 matchingFamily
-public staticbooleanmatchingFamily(Cellleft,
+public staticbooleanmatchingFamily(Cellleft,
  byte[]buf,
   

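Several `CellUtil.createCellScanner` overloads in the diff above flatten nested collections of cells — including a `NavigableMap<byte[], List<Cell>>` ("Flatten the map of cells out under the CellScanner") — into a single forward scanner. A minimal sketch of that flattening idea over plain Java collections (toy `String` cells, not the HBase `CellScanner` interface):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class FlattenSketch {
    // Flatten a family -> cells map into one ordered list, mirroring the
    // "flatten the map of cells out under the CellScanner" behavior.
    static List<String> flatten(NavigableMap<String, List<String>> byFamily) {
        List<String> out = new ArrayList<>();
        for (List<String> cells : byFamily.values()) {
            out.addAll(cells);
        }
        return out;
    }

    public static void main(String[] args) {
        NavigableMap<String, List<String>> byFamily = new TreeMap<>();
        byFamily.put("cf2", Arrays.asList("c", "d"));
        byFamily.put("cf1", Arrays.asList("a", "b"));
        // TreeMap iterates families in sorted key order: cf1 first, then cf2.
        System.out.println(flatten(byFamily)); // [a, b, c, d]
    }
}
```

Using a `NavigableMap` keyed in sorted order is what makes the flattened view come out in family order, which is the property a scanner over sorted cells relies on.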
[39/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/Result.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Result.html 
b/apidocs/org/apache/hadoop/hbase/client/Result.html
index f3966fc..86a7536 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Result.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Result.html
@@ -115,7 +115,7 @@ var activeTableTab = "activeTableTab";
 
 @InterfaceAudience.Public
  @InterfaceStability.Stable
-public class Result
+public class Result
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellScanner
 Single row result of a Get or Scan query.
@@ -342,11 +342,11 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 create(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
   http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Booleanexists,
   booleanstale,
-  booleanpartial)
+  booleanmayHaveMoreCellsInRow)
 
 
 static Result
-createCompleteResult(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListResultpartialResults)
+createCompleteResult(http://docs.oracle.com/javase/8/docs/api/java/lang/Iterable.html?is-external=true;
 title="class or interface in java.lang">IterableResultpartialResults)
 Forms a single result from the partial results in the 
partialResults list.
 
 
@@ -493,7 +493,8 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 boolean
 mayHaveMoreCellsInRow()
-For scanning large rows, the RS may choose to return the 
cells chunk by chunk to prevent OOM.
+For scanning large rows, the RS may choose to return the 
cells chunk by chunk to prevent OOM
+ or timeout.
 
 
 
@@ -548,7 +549,7 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 
 EMPTY_RESULT
-public static finalResult EMPTY_RESULT
+public static finalResult EMPTY_RESULT
 
 
 
@@ -565,7 +566,7 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 
 Result
-publicResult()
+publicResult()
 Creates an empty Result w/ no KeyValue payload; returns 
null if you call rawCells().
  Use this to represent no results if null won't do or in old 
'mapred' as opposed
  to 'mapreduce' package MapReduce where you need to overwrite a Result 
instance with a
@@ -586,7 +587,7 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 
 create
-public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells)
+public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells)
 Instantiate a Result with the specified List of KeyValues.
  Note: You must ensure that the keyvalues are already 
sorted.
 
@@ -601,7 +602,7 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 
 create
-public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
+public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
 http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Booleanexists)
 
 
@@ -611,7 +612,7 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 
 create
-public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
+public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
 http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Booleanexists,
 booleanstale)
 
@@ -622,10 +623,10 @@ implements org.apache.hadoop.hbase.CellScannable, 
org.apache.hadoop.hbase.CellSc
 
 
 create
-public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
+public staticResultcreate(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in 

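The `Result` diff above widens `createCompleteResult` to accept any `Iterable` of partial results, and documents `mayHaveMoreCellsInRow()` as the flag a region server sets when it returns a large row chunk by chunk to prevent OOM or timeout. A toy sketch of reassembling such a chunked row — the `Chunk` type is hypothetical, not the HBase `Result` class:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartialResultSketch {
    // One chunk of a row's cells; the flag says the server may send more.
    static final class Chunk {
        final List<String> cells;
        final boolean mayHaveMoreCellsInRow;
        Chunk(List<String> cells, boolean more) {
            this.cells = cells;
            this.mayHaveMoreCellsInRow = more;
        }
    }

    // Accepts any Iterable, like the widened createCompleteResult signature.
    static List<String> createCompleteResult(Iterable<Chunk> partials) {
        List<String> all = new ArrayList<>();
        for (Chunk c : partials) {
            all.addAll(c.cells);
            if (!c.mayHaveMoreCellsInRow) {
                break; // last chunk of this row
            }
        }
        return all;
    }

    public static void main(String[] args) {
        List<Chunk> chunks = Arrays.asList(
                new Chunk(Arrays.asList("cf:a=1"), true),
                new Chunk(Arrays.asList("cf:b=2"), false));
        System.out.println(createCompleteResult(chunks)); // [cf:a=1, cf:b=2]
    }
}
```

Accepting `Iterable` rather than `List` is a small but useful loosening: the caller can stream chunks straight off the wire without materializing them all first.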
[30/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/overview-tree.html
--
diff --git a/apidocs/overview-tree.html b/apidocs/overview-tree.html
index 9e87724..332eb81 100644
--- a/apidocs/overview-tree.html
+++ b/apidocs/overview-tree.html
@@ -483,6 +483,7 @@
 org.apache.hadoop.hbase.ScheduledChore 
(implements java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Runnable.html?is-external=true;
 title="class or interface in java.lang">Runnable)
 org.apache.hadoop.hbase.ServerLoad
 org.apache.hadoop.hbase.ServerName 
(implements java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true;
 title="class or interface in java.lang">ComparableT, java.io.http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true;
 title="class or interface in java.io">Serializable)
+org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
 org.apache.hadoop.hbase.client.SnapshotDescription
 org.apache.hadoop.hbase.types.Struct 
(implements org.apache.hadoop.hbase.types.DataTypeT)
 org.apache.hadoop.hbase.types.StructBuilder
@@ -856,6 +857,8 @@
 org.apache.hadoop.hbase.client.RawAsyncTable.CoprocessorCallableS,R
 org.apache.hadoop.hbase.client.RawAsyncTable.CoprocessorCallbackR
 org.apache.hadoop.hbase.client.RawScanResultConsumer
+org.apache.hadoop.hbase.client.RawScanResultConsumer.ScanController
+org.apache.hadoop.hbase.client.RawScanResultConsumer.ScanResumer
 org.apache.hadoop.hbase.client.RequestController
 org.apache.hadoop.hbase.client.RequestController.Checker
 com.google.protobuf.RpcChannel
@@ -874,32 +877,32 @@
 
 java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true;
 title="class or interface in java.lang">EnumE (implements java.lang.http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true;
 title="class or interface in java.lang">ComparableT, java.io.http://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true;
 title="class or interface in java.io">Serializable)
 
-org.apache.hadoop.hbase.KeepDeletedCells
 org.apache.hadoop.hbase.MemoryCompactionPolicy
+org.apache.hadoop.hbase.KeepDeletedCells
 org.apache.hadoop.hbase.ProcedureState
-org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType
-org.apache.hadoop.hbase.filter.Filter.ReturnCode
 org.apache.hadoop.hbase.filter.BitComparator.BitwiseOp
+org.apache.hadoop.hbase.filter.Filter.ReturnCode
 org.apache.hadoop.hbase.filter.FilterList.Operator
+org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType
 org.apache.hadoop.hbase.filter.CompareFilter.CompareOp
 org.apache.hadoop.hbase.util.Order
 org.apache.hadoop.hbase.io.encoding.DataBlockEncoding
+org.apache.hadoop.hbase.client.SnapshotType
+org.apache.hadoop.hbase.client.Scan.ReadType
+org.apache.hadoop.hbase.client.MasterSwitchType
 org.apache.hadoop.hbase.client.CompactType
 org.apache.hadoop.hbase.client.CompactionState
-org.apache.hadoop.hbase.client.Scan.ReadType
-org.apache.hadoop.hbase.client.IsolationLevel
+org.apache.hadoop.hbase.client.RequestController.ReturnCode
 org.apache.hadoop.hbase.client.Durability
-org.apache.hadoop.hbase.client.MobCompactPartitionPolicy
-org.apache.hadoop.hbase.client.SnapshotType
 org.apache.hadoop.hbase.client.Consistency
-org.apache.hadoop.hbase.client.RequestController.ReturnCode
-org.apache.hadoop.hbase.client.MasterSwitchType
+org.apache.hadoop.hbase.client.MobCompactPartitionPolicy
+org.apache.hadoop.hbase.client.IsolationLevel
 org.apache.hadoop.hbase.client.security.SecurityCapability
-org.apache.hadoop.hbase.quotas.QuotaType
+org.apache.hadoop.hbase.regionserver.BloomType
 org.apache.hadoop.hbase.quotas.ThrottlingException.Type
-org.apache.hadoop.hbase.quotas.QuotaScope
 org.apache.hadoop.hbase.quotas.ThrottleType
-org.apache.hadoop.hbase.regionserver.BloomType
+org.apache.hadoop.hbase.quotas.QuotaType
+org.apache.hadoop.hbase.quotas.QuotaScope
 
 
 



[18/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
index c248583..cf497d3 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
@@ -151,779 +151,779 @@
 143  private long maxResultSize = -1;
 144  private boolean cacheBlocks = true;
 145  private boolean reversed = false;
-146  private Mapbyte[], 
NavigableSetbyte[] familyMap =
-147  new TreeMapbyte[], 
NavigableSetbyte[](Bytes.BYTES_COMPARATOR);
-148  private Boolean asyncPrefetch = null;
-149
-150  /**
-151   * Parameter name for client scanner 
sync/async prefetch toggle.
-152   * When using async scanner, 
prefetching data from the server is done at the background.
-153   * The parameter currently won't have 
any effect in the case that the user has set
-154   * Scan#setSmall or Scan#setReversed
-155   */
-156  public static final String 
HBASE_CLIENT_SCANNER_ASYNC_PREFETCH =
-157  
"hbase.client.scanner.async.prefetch";
-158
-159  /**
-160   * Default value of {@link 
#HBASE_CLIENT_SCANNER_ASYNC_PREFETCH}.
-161   */
-162  public static final boolean 
DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH = false;
-163
-164  /**
-165   * Set it true for small scan to get 
better performance Small scan should use pread and big scan
-166   * can use seek + read seek + read is 
fast but can cause two problem (1) resource contention (2)
-167   * cause too much network io [89-fb] 
Using pread for non-compaction read request
-168   * 
https://issues.apache.org/jira/browse/HBASE-7266 On the other hand, if setting 
it true, we
-169   * would do 
openScanner,next,closeScanner in one RPC call. It means the better performance 
for
-170   * small scan. [HBASE-9488]. Generally, 
if the scan range is within one data block(64KB), it could
-171   * be considered as a small scan.
-172   */
-173  private boolean small = false;
-174
-175  /**
-176   * The mvcc read point to use when open 
a scanner. Remember to clear it after switching regions as
-177   * the mvcc is only valid within region 
scope.
-178   */
-179  private long mvccReadPoint = -1L;
-180
-181  /**
-182   * The number of rows we want for this 
scan. We will terminate the scan if the number of return
-183   * rows reaches this value.
-184   */
-185  private int limit = -1;
-186
-187  /**
-188   * Control whether to use pread at 
server side.
-189   */
-190  private ReadType readType = 
ReadType.DEFAULT;
-191  /**
-192   * Create a Scan operation across all 
rows.
-193   */
-194  public Scan() {}
-195
-196  /**
-197   * @deprecated use {@code new 
Scan().withStartRow(startRow).setFilter(filter)} instead.
-198   */
-199  @Deprecated
-200  public Scan(byte[] startRow, Filter 
filter) {
-201this(startRow);
-202this.filter = filter;
-203  }
-204
-205  /**
-206   * Create a Scan operation starting at 
the specified row.
-207   * p
-208   * If the specified row does not exist, 
the Scanner will start from the next closest row after the
-209   * specified row.
-210   * @param startRow row to start scanner 
at or after
-211   * @deprecated use {@code new 
Scan().withStartRow(startRow)} instead.
-212   */
-213  @Deprecated
-214  public Scan(byte[] startRow) {
-215setStartRow(startRow);
-216  }
-217
-218  /**
-219   * Create a Scan operation for the 
range of rows specified.
-220   * @param startRow row to start scanner 
at or after (inclusive)
-221   * @param stopRow row to stop scanner 
before (exclusive)
-222   * @deprecated use {@code new 
Scan().withStartRow(startRow).withStopRow(stopRow)} instead.
-223   */
-224  @Deprecated
-225  public Scan(byte[] startRow, byte[] 
stopRow) {
-226setStartRow(startRow);
-227setStopRow(stopRow);
-228  }
-229
-230  /**
-231   * Creates a new instance of this class 
while copying all values.
-232   *
-233   * @param scan  The scan instance to 
copy from.
-234   * @throws IOException When copying the 
values fails.
-235   */
-236  public Scan(Scan scan) throws 
IOException {
-237startRow = scan.getStartRow();
-238includeStartRow = 
scan.includeStartRow();
-239stopRow  = scan.getStopRow();
-240includeStopRow = 
scan.includeStopRow();
-241maxVersions = 
scan.getMaxVersions();
-242batch = scan.getBatch();
-243storeLimit = 
scan.getMaxResultsPerColumnFamily();
-244storeOffset = 
scan.getRowOffsetPerColumnFamily();
-245caching = scan.getCaching();
-246maxResultSize = 
scan.getMaxResultSize();
-247cacheBlocks = 
scan.getCacheBlocks();
-248filter = scan.getFilter(); // 
clone?
-249loadColumnFamiliesOnDemand = 
scan.getLoadColumnFamiliesOnDemandValue();
-250consistency = 
scan.getConsistency();
-251
this.setIsolationLevel(scan.getIsolationLevel());
-252reversed 

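The `Scan` source above deprecates the row-range constructors in favor of chained setters (`new Scan().withStartRow(startRow).withStopRow(stopRow)`), keeping the documented contract that the start row is inclusive and the stop row exclusive. A toy fluent-builder sketch of that range idea over `String` rows — not the HBase `Scan` class itself:

```java
public class ScanRangeSketch {
    private String startRow = "";   // inclusive lower bound; "" means open
    private String stopRow = null;  // exclusive upper bound; null means open

    // Chained setters return this, like Scan#withStartRow / Scan#withStopRow.
    ScanRangeSketch withStartRow(String start) { this.startRow = start; return this; }
    ScanRangeSketch withStopRow(String stop) { this.stopRow = stop; return this; }

    // startRow inclusive, stopRow exclusive, matching the deprecated
    // Scan(byte[] startRow, byte[] stopRow) constructor's documented contract.
    boolean contains(String row) {
        return row.compareTo(startRow) >= 0
                && (stopRow == null || row.compareTo(stopRow) < 0);
    }

    public static void main(String[] args) {
        ScanRangeSketch range = new ScanRangeSketch().withStartRow("b").withStopRow("d");
        System.out.println(range.contains("c")); // true
        System.out.println(range.contains("d")); // false: stop row is exclusive
    }
}
```

The fluent style is why the constructors were deprecated: a single chain can combine range, filter, and read-type settings without a combinatorial explosion of constructor overloads.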
[26/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html 
b/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
index 528a7e6..8553b81 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
@@ -69,393 +69,390 @@
 061@InterfaceStability.Evolving
 062public class LocalHBaseCluster {
 063  private static final Log LOG = 
LogFactory.getLog(LocalHBaseCluster.class);
-064  private final 
ListJVMClusterUtil.MasterThread masterThreads =
-065new 
CopyOnWriteArrayListJVMClusterUtil.MasterThread();
-066  private final 
ListJVMClusterUtil.RegionServerThread regionThreads =
-067new 
CopyOnWriteArrayListJVMClusterUtil.RegionServerThread();
-068  private final static int DEFAULT_NO = 
1;
-069  /** local mode */
-070  public static final String LOCAL = 
"local";
-071  /** 'local:' */
-072  public static final String LOCAL_COLON 
= LOCAL + ":";
-073  private final Configuration conf;
-074  private final Class? extends 
HMaster masterClass;
-075  private final Class? extends 
HRegionServer regionServerClass;
-076
-077  /**
-078   * Constructor.
-079   * @param conf
-080   * @throws IOException
-081   */
-082  public LocalHBaseCluster(final 
Configuration conf)
-083  throws IOException {
-084this(conf, DEFAULT_NO);
-085  }
-086
-087  /**
-088   * Constructor.
-089   * @param conf Configuration to use.  
Post construction has the master's
-090   * address.
-091   * @param noRegionServers Count of 
regionservers to start.
-092   * @throws IOException
-093   */
-094  public LocalHBaseCluster(final 
Configuration conf, final int noRegionServers)
-095  throws IOException {
-096this(conf, 1, noRegionServers, 
getMasterImplementation(conf),
-097
getRegionServerImplementation(conf));
-098  }
-099
-100  /**
-101   * Constructor.
-102   * @param conf Configuration to use.  
Post construction has the active master
-103   * address.
-104   * @param noMasters Count of masters to 
start.
-105   * @param noRegionServers Count of 
regionservers to start.
-106   * @throws IOException
-107   */
-108  public LocalHBaseCluster(final 
Configuration conf, final int noMasters,
-109  final int noRegionServers)
-110  throws IOException {
-111this(conf, noMasters, 
noRegionServers, getMasterImplementation(conf),
-112
getRegionServerImplementation(conf));
-113  }
-114
-115  @SuppressWarnings("unchecked")
-116  private static Class? extends 
HRegionServer getRegionServerImplementation(final Configuration conf) {
-117return (Class? extends 
HRegionServer)conf.getClass(HConstants.REGION_SERVER_IMPL,
-118   HRegionServer.class);
-119  }
-120
-121  @SuppressWarnings("unchecked")
-122  private static Class? extends 
HMaster getMasterImplementation(final Configuration conf) {
-123return (Class? extends 
HMaster)conf.getClass(HConstants.MASTER_IMPL,
-124   HMaster.class);
-125  }
-126
-127  /**
-128   * Constructor.
-129   * @param conf Configuration to use.  
Post construction has the master's
-130   * address.
-131   * @param noMasters Count of masters to 
start.
-132   * @param noRegionServers Count of 
regionservers to start.
-133   * @param masterClass
-134   * @param regionServerClass
-135   * @throws IOException
-136   */
-137  @SuppressWarnings("unchecked")
-138  public LocalHBaseCluster(final 
Configuration conf, final int noMasters,
-139final int noRegionServers, final 
Class? extends HMaster masterClass,
-140final Class? extends 
HRegionServer regionServerClass)
-141  throws IOException {
-142this.conf = conf;
-143
-144// Always have masters and 
regionservers come up on port '0' so we don't
-145// clash over default ports.
-146conf.set(HConstants.MASTER_PORT, 
"0");
-147
conf.set(HConstants.REGIONSERVER_PORT, "0");
-148if 
(conf.getInt(HConstants.REGIONSERVER_INFO_PORT, 0) != -1) {
-149  
conf.set(HConstants.REGIONSERVER_INFO_PORT, "0");
-150}
-151
-152this.masterClass = (Class? 
extends HMaster)
-153  
conf.getClass(HConstants.MASTER_IMPL, masterClass);
-154// Start the HMasters.
-155for (int i = 0; i  noMasters; 
i++) {
-156  addMaster(new Configuration(conf), 
i);
-157}
-158// Start the HRegionServers.
-159this.regionServerClass =
-160  (Class? extends 
HRegionServer)conf.getClass(HConstants.REGION_SERVER_IMPL,
-161   regionServerClass);
-162
-163for (int i = 0; i  
noRegionServers; i++) {
-164  addRegionServer(new 
Configuration(conf), i);
-165}
-166  }
-167
-168  public 
JVMClusterUtil.RegionServerThread addRegionServer()
-169  throws IOException {
-170return addRegionServer(new 
Configuration(conf), this.regionThreads.size());
-171  }
-172
-173  

[49/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index e946682..322bb10 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,24 +5,24 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.6, based on Prawn 1.2.1)
 /Producer (Apache HBase Team)
-/CreationDate (D:20170217144817+00'00')
-/ModDate (D:20170217144817+00'00')
+/CreationDate (D:20170321142325+00'00')
+/ModDate (D:20170321142325+00'00')
 >>
 endobj
 2 0 obj
 << /Type /Catalog
 /Pages 3 0 R
 /Names 25 0 R
-/Outlines 4044 0 R
-/PageLabels 4252 0 R
+/Outlines 4042 0 R
+/PageLabels 4250 0 R
 /PageMode /UseOutlines
 /ViewerPreferences [/FitWindow]
 >>
 endobj
 3 0 obj
 << /Type /Pages
-/Count 675
-/Kids [7 0 R 13 0 R 15 0 R 17 0 R 19 0 R 21 0 R 23 0 R 39 0 R 43 0 R 47 0 R 55 
0 R 58 0 R 60 0 R 62 0 R 66 0 R 71 0 R 74 0 R 79 0 R 81 0 R 84 0 R 86 0 R 92 0 
R 101 0 R 106 0 R 108 0 R 129 0 R 135 0 R 142 0 R 144 0 R 149 0 R 152 0 R 162 0 
R 170 0 R 181 0 R 191 0 R 195 0 R 197 0 R 201 0 R 207 0 R 209 0 R 211 0 R 213 0 
R 215 0 R 218 0 R 224 0 R 227 0 R 229 0 R 231 0 R 233 0 R 235 0 R 237 0 R 239 0 
R 243 0 R 246 0 R 249 0 R 251 0 R 253 0 R 255 0 R 257 0 R 259 0 R 261 0 R 267 0 
R 270 0 R 272 0 R 274 0 R 276 0 R 281 0 R 286 0 R 291 0 R 295 0 R 298 0 R 313 0 
R 323 0 R 329 0 R 340 0 R 350 0 R 355 0 R 357 0 R 359 0 R 370 0 R 375 0 R 379 0 
R 384 0 R 388 0 R 399 0 R 411 0 R 425 0 R 435 0 R 437 0 R 439 0 R 444 0 R 454 0 
R 467 0 R 477 0 R 481 0 R 484 0 R 488 0 R 492 0 R 495 0 R 498 0 R 500 0 R 503 0 
R 507 0 R 509 0 R 514 0 R 518 0 R 524 0 R 528 0 R 530 0 R 536 0 R 538 0 R 542 0 
R 550 0 R 552 0 R 555 0 R 559 0 R 562 0 R 565 0 R 580 0 R 587 0 R 594 0 R 605 0 
R 611 0 R 619 0 R 627 0 R 630 0 R 634 0
  R 637 0 R 648 0 R 656 0 R 662 0 R 668 0 R 672 0 R 674 0 R 688 0 R 700 0 R 706 
0 R 712 0 R 715 0 R 724 0 R 732 0 R 736 0 R 741 0 R 747 0 R 749 0 R 751 0 R 753 
0 R 761 0 R 770 0 R 774 0 R 782 0 R 790 0 R 796 0 R 800 0 R 806 0 R 810 0 R 816 
0 R 824 0 R 826 0 R 830 0 R 835 0 R 842 0 R 845 0 R 852 0 R 861 0 R 865 0 R 867 
0 R 870 0 R 874 0 R 879 0 R 882 0 R 894 0 R 898 0 R 903 0 R 911 0 R 916 0 R 920 
0 R 925 0 R 927 0 R 930 0 R 932 0 R 936 0 R 938 0 R 941 0 R 945 0 R 949 0 R 954 
0 R 959 0 R 962 0 R 964 0 R 971 0 R 975 0 R 980 0 R 993 0 R 997 0 R 1001 0 R 
1006 0 R 1008 0 R 1017 0 R 1020 0 R 1025 0 R 1028 0 R 1037 0 R 1040 0 R 1046 0 
R 1053 0 R 1056 0 R 1058 0 R 1067 0 R 1069 0 R 1071 0 R 1074 0 R 1076 0 R 1078 
0 R 1080 0 R 1082 0 R 1084 0 R 1087 0 R 1090 0 R 1095 0 R 1098 0 R 1100 0 R 
1102 0 R 1104 0 R 1109 0 R 1118 0 R 1121 0 R 1123 0 R 1125 0 R 1130 0 R 1132 0 
R 1135 0 R 1137 0 R 1139 0 R 1141 0 R 1144 0 R 1149 0 R 1155 0 R 1162 0 R 1167 
0 R 1181 0 R 1192 0 R 1196 0 R 1211 0 R 1220 0 R 
 1234 0 R 1238 0 R 1248 0 R 1261 0 R 1265 0 R 1277 0 R 1286 0 R 1293 0 R 1297 0 
R 1306 0 R 1311 0 R 1315 0 R 1321 0 R 1327 0 R 1334 0 R 1342 0 R 1344 0 R 1356 
0 R 1358 0 R 1363 0 R 1367 0 R 1372 0 R 1382 0 R 1388 0 R 1394 0 R 1396 0 R 
1398 0 R 1410 0 R 1417 0 R 1427 0 R 1432 0 R 1445 0 R 1452 0 R 1455 0 R 1464 0 
R 1473 0 R 1478 0 R 1484 0 R 1488 0 R 1491 0 R 1493 0 R 1500 0 R 1503 0 R 1510 
0 R 1514 0 R 1517 0 R 1526 0 R 1530 0 R 1533 0 R 1535 0 R 1544 0 R 1551 0 R 
1557 0 R 1562 0 R 1566 0 R 1569 0 R 1575 0 R 1580 0 R 1585 0 R 1587 0 R 1589 0 
R 1592 0 R 1594 0 R 1602 0 R 1605 0 R 1611 0 R 1618 0 R 1622 0 R 1627 0 R 1632 
0 R 1635 0 R 1637 0 R 1639 0 R 1641 0 R 1647 0 R 1657 0 R 1659 0 R 1661 0 R 
1663 0 R 1665 0 R 1668 0 R 1670 0 R 1672 0 R 1674 0 R 1677 0 R 1679 0 R 1681 0 
R 1683 0 R 1687 0 R 1691 0 R 1700 0 R 1702 0 R 1704 0 R 1706 0 R 1708 0 R 1715 
0 R 1717 0 R 1722 0 R 1724 0 R 1726 0 R 1733 0 R 1738 0 R 1742 0 R 1746 0 R 
1749 0 R 1752 0 R 1756 0 R 1758 0 R 1761 0 R 1763 0 R 1765 0 
 R 1767 0 R 1771 0 R 1773 0 R 1777 0 R 1779 0 R 1781 0 R 1783 0 R 1785 0 R 1793 
0 R 1796 0 R 1801 0 R 1803 0 R 1805 0 R 1807 0 R 1809 0 R 1817 0 R 1827 0 R 
1830 0 R 1846 0 R 1861 0 R 1865 0 R 1870 0 R 1875 0 R 1878 0 R 1883 0 R 1885 0 
R 1890 0 R 1892 0 R 1895 0 R 1897 0 R 1899 0 R 1901 0 R 1903 0 R 1907 0 R 1909 
0 R 1913 0 R 1917 0 R 1925 0 R 1931 0 R 1942 0 R 1956 0 R 1969 0 R 1987 0 R 
1991 0 R 1993 0 R 1997 0 R 2014 0 R 2022 0 R 2029 0 R 2038 0 R 2042 0 R 2052 0 
R 2063 0 R 2069 0 R 2078 0 R 2091 0 R 2108 0 R 2120 0 R 2123 0 R 2132 0 R 2147 
0 R 2154 0 R 2157 0 R 2162 0 R 2167 0 R 2177 0 R 2185 0 R 2188 0 R 2190 0 R 
2194 0 R 2209 0 R 2218 0 R 2223 0 R 2227 0 R 2230 0 R 2232 0 R 2234 0 R 2236 0 
R 2238 0 R 2243 0 R 2245 0 R 2255 0 R 2265 0 R 2272 0 R 2284 0 R 2289 0 R 2293 
0 R 2306 0 R 2313 0 R 2319 0 R 2321 0 R 2331 0 R 2338 0 R 2349 0 R 2353 0 R 
2362 0 R 2368 0 R 2378 0 R 2387 0 R 2395 0 R 2401 0 R 2406 0 R 2410 0 R 2413 0 
R 2415 0 R 2421 0 R 2425 0 R 2429 0 R 2435 0 R 2442 0 R 2447 
 0 R 2451 0 R 2460 0 R 2465 0 R 2470 0 R 

[37/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
index 64ed51a..76d7733 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
@@ -227,58 +227,58 @@
 
 
 
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
-AsyncTableBase.checkAndDelete(byte[]row,
+boolean
+Table.checkAndDelete(byte[]row,
   byte[]family,
   byte[]qualifier,
   byte[]value,
   Deletedelete)
-Atomically checks if a row/family/qualifier value equals to 
the expected value.
+Atomically checks if a row/family/qualifier value matches 
the expected
+ value.
 
 
 
-boolean
-Table.checkAndDelete(byte[]row,
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
+AsyncTableBase.checkAndDelete(byte[]row,
   byte[]family,
   byte[]qualifier,
   byte[]value,
   Deletedelete)
-Atomically checks if a row/family/qualifier value matches 
the expected
- value.
+Atomically checks if a row/family/qualifier value equals to 
the expected value.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
-AsyncTableBase.checkAndDelete(byte[]row,
+boolean
+Table.checkAndDelete(byte[]row,
   byte[]family,
   byte[]qualifier,
   CompareFilter.CompareOpcompareOp,
   byte[]value,
   Deletedelete)
-Atomically checks if a row/family/qualifier value matches 
the expected value.
+Atomically checks if a row/family/qualifier value matches 
the expected
+ value.
 
 
 
-boolean
-Table.checkAndDelete(byte[]row,
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
+AsyncTableBase.checkAndDelete(byte[]row,
   byte[]family,
   byte[]qualifier,
   CompareFilter.CompareOpcompareOp,
   byte[]value,
   Deletedelete)
-Atomically checks if a row/family/qualifier value matches 
the expected
- value.
+Atomically checks if a row/family/qualifier value matches 
the expected value.
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncTableBase.delete(Deletedelete)
+void
+Table.delete(Deletedelete)
 Deletes the specified cells/row.
 
 
 
-void
-Table.delete(Deletedelete)
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
+AsyncTableBase.delete(Deletedelete)
 Deletes the specified cells/row.
 
 
@@ -292,14 +292,14 @@
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">Listhttp://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncTableBase.delete(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListDeletedeletes)
+void
+Table.delete(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListDeletedeletes)
 Deletes the specified cells/rows in bulk.
 
 
 
-void

[29/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html 
b/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
index b55dbd3..400d699 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
@@ -630,2583 +630,2548 @@
 622}
 623
 624@Override
-625public long heapOverhead() {
-626  long overhead = ((ExtendedCell) 
this.cell).heapOverhead() + HEAP_SIZE_OVERHEAD;
-627  if (this.tags != null) {
-628overhead += ClassSize.ARRAY;
-629  }
-630  return overhead;
-631}
-632
-633@Override
-634public Cell deepClone() {
-635  Cell clonedBaseCell = 
((ExtendedCell) this.cell).deepClone();
-636  return new 
TagRewriteCell(clonedBaseCell, this.tags);
-637}
-638  }
-639
-640  @InterfaceAudience.Private
-641  private static class 
TagRewriteByteBufferCell extends ByteBufferCell implements ExtendedCell {
-642
-643protected ByteBufferCell cell;
-644protected byte[] tags;
-645private static final long 
HEAP_SIZE_OVERHEAD = ClassSize.OBJECT + 2 * ClassSize.REFERENCE;
-646
-647/**
-648 * @param cell The original 
ByteBufferCell which it rewrites
-649 * @param tags the tags bytes. The 
array suppose to contain the tags bytes alone.
-650 */
-651public 
TagRewriteByteBufferCell(ByteBufferCell cell, byte[] tags) {
-652  assert cell instanceof 
ExtendedCell;
-653  assert tags != null;
-654  this.cell = cell;
-655  this.tags = tags;
-656  // tag offset will be treated as 0 
and length this.tags.length
-657  if (this.cell instanceof 
TagRewriteByteBufferCell) {
-658// Cleaning the ref so that the 
byte[] can be GCed
-659((TagRewriteByteBufferCell) 
this.cell).tags = null;
-660  }
-661}
-662
-663@Override
-664public byte[] getRowArray() {
-665  return this.cell.getRowArray();
-666}
-667
-668@Override
-669public int getRowOffset() {
-670  return this.cell.getRowOffset();
-671}
-672
-673@Override
-674public short getRowLength() {
-675  return this.cell.getRowLength();
-676}
-677
-678@Override
-679public byte[] getFamilyArray() {
-680  return 
this.cell.getFamilyArray();
-681}
-682
-683@Override
-684public int getFamilyOffset() {
-685  return 
this.cell.getFamilyOffset();
-686}
-687
-688@Override
-689public byte getFamilyLength() {
-690  return 
this.cell.getFamilyLength();
-691}
-692
-693@Override
-694public byte[] getQualifierArray() {
-695  return 
this.cell.getQualifierArray();
-696}
-697
-698@Override
-699public int getQualifierOffset() {
-700  return 
this.cell.getQualifierOffset();
-701}
-702
-703@Override
-704public int getQualifierLength() {
-705  return 
this.cell.getQualifierLength();
-706}
-707
-708@Override
-709public long getTimestamp() {
-710  return this.cell.getTimestamp();
-711}
-712
-713@Override
-714public byte getTypeByte() {
-715  return this.cell.getTypeByte();
-716}
-717
-718@Override
-719public long getSequenceId() {
-720  return this.cell.getSequenceId();
-721}
-722
-723@Override
-724public byte[] getValueArray() {
-725  return this.cell.getValueArray();
-726}
-727
-728@Override
-729public int getValueOffset() {
-730  return 
this.cell.getValueOffset();
-731}
-732
-733@Override
-734public int getValueLength() {
-735  return 
this.cell.getValueLength();
-736}
-737
-738@Override
-739public byte[] getTagsArray() {
-740  return this.tags;
-741}
-742
-743@Override
-744public int getTagsOffset() {
-745  return 0;
+625public Cell deepClone() {
+626  Cell clonedBaseCell = 
((ExtendedCell) this.cell).deepClone();
+627  return new 
TagRewriteCell(clonedBaseCell, this.tags);
+628}
+629  }
+630
+631  @InterfaceAudience.Private
+632  private static class 
TagRewriteByteBufferCell extends ByteBufferCell implements ExtendedCell {
+633
+634protected ByteBufferCell cell;
+635protected byte[] tags;
+636private static final long 
HEAP_SIZE_OVERHEAD = ClassSize.OBJECT + 2 * ClassSize.REFERENCE;
+637
+638/**
+639 * @param cell The original 
ByteBufferCell which it rewrites
+640 * @param tags the tags bytes. The 
array suppose to contain the tags bytes alone.
+641 */
+642public 
TagRewriteByteBufferCell(ByteBufferCell cell, byte[] tags) {
+643  assert cell instanceof 
ExtendedCell;
+644  assert tags != null;
+645  this.cell = cell;
+646  this.tags = tags;
+647  // tag offset will be treated as 0 
and length this.tags.length
+648  if (this.cell instanceof 
TagRewriteByteBufferCell) {
+649   
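The `TagRewriteByteBufferCell` shown in this hunk is a delegating wrapper: it forwards every accessor to the wrapped cell but serves its own `tags` array (offset 0, full length), and when it wraps another rewrite wrapper it clears the inner wrapper's `tags` reference so the superseded byte[] can be garbage collected. A stripped-down sketch of that pattern follows; the `TagCell` interface and `TagRewriteTagCell` class are hypothetical simplifications, not the HBase classes.

```java
// Hypothetical minimal interface standing in for the HBase cell type.
interface TagCell {
    byte[] getValue();
    byte[] getTags();
}

class TagRewriteTagCell implements TagCell {
    private final TagCell cell;   // delegate for everything except tags
    byte[] tags;                  // replacement tags, offset 0, full length

    TagRewriteTagCell(TagCell cell, byte[] tags) {
        assert tags != null;
        this.cell = cell;
        this.tags = tags;
        // If we wrap a previous rewrite, drop its tags so they can be GCed.
        if (cell instanceof TagRewriteTagCell) {
            ((TagRewriteTagCell) cell).tags = null;
        }
    }

    @Override public byte[] getValue() { return cell.getValue(); }  // delegated
    @Override public byte[] getTags()  { return tags; }             // overridden
}
```

The design choice mirrored here is that rewriting tags never copies the underlying cell; only the small tags array is replaced, and stale arrays are made unreachable eagerly.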

[19/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
index d9a4a9a..943ce05 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
@@ -35,96 +35,102 @@
 027
 028import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
 029import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-030
-031/**
-032 * Interface for client-side scanning. Go 
to {@link Table} to obtain instances.
-033 */
-034@InterfaceAudience.Public
-035@InterfaceStability.Stable
-036public interface ResultScanner extends Closeable, Iterable<Result> {
-037
-038  @Override
-039  default Iterator<Result> iterator() {
-040return new Iterator<Result>() {
-041  // The next RowResult, possibly 
pre-read
-042  Result next = null;
-043
-044  // return true if there is another 
item pending, false if there isn't.
-045  // this method is where the actual 
advancing takes place, but you need
-046  // to call next() to consume it. 
hasNext() will only advance if there
-047  // isn't a pending next().
-048  @Override
-049  public boolean hasNext() {
-050if (next != null) {
-051  return true;
-052}
-053try {
-054  return (next = 
ResultScanner.this.next()) != null;
-055} catch (IOException e) {
-056  throw new 
UncheckedIOException(e);
-057}
-058  }
-059
-060  // get the pending next item and 
advance the iterator. returns null if
-061  // there is no next item.
-062  @Override
-063  public Result next() {
-064// since hasNext() does the real 
advancing, we call this to determine
-065// if there is a next before 
proceeding.
-066if (!hasNext()) {
-067  return null;
-068}
-069
-070// if we get to here, then 
hasNext() has given us an item to return.
-071// we want to return the item and 
then null out the next pointer, so
-072// we use a temporary variable.
-073Result temp = next;
-074next = null;
-075return temp;
-076  }
-077};
-078  }
-079
-080  /**
-081   * Grab the next row's worth of values. 
The scanner will return a Result.
-082   * @return Result object if there is 
another row, null if the scanner is exhausted.
-083   * @throws IOException e
-084   */
-085  Result next() throws IOException;
-086
-087  /**
-088   * Get nbRows rows. How many RPCs are 
made is determined by the {@link Scan#setCaching(int)}
-089   * setting (or 
hbase.client.scanner.caching in hbase-site.xml).
-090   * @param nbRows number of rows to 
return
-091   * @return Between zero and nbRows 
rowResults. Scan is done if returned array is of zero-length
-092   * (We never return null).
-093   * @throws IOException
-094   */
-095  default Result[] next(int nbRows) 
throws IOException {
-096List<Result> resultSets = new ArrayList<>(nbRows);
-097for (int i = 0; i < nbRows; i++) {
-098  Result next = next();
-099  if (next != null) {
-100resultSets.add(next);
-101  } else {
-102break;
-103  }
-104}
-105return resultSets.toArray(new 
Result[0]);
-106  }
-107
-108  /**
-109   * Closes the scanner and releases any 
resources it has allocated
-110   */
-111  @Override
-112  void close();
-113
-114  /**
-115   * Allow the client to renew the 
scanner's lease on the server.
-116   * @return true if the lease was 
successfully renewed, false otherwise.
-117   */
-118  boolean renewLease();
-119}
+030import 
org.apache.hadoop.hbase.client.metrics.ScanMetrics;
+031
+032/**
+033 * Interface for client-side scanning. Go 
to {@link Table} to obtain instances.
+034 */
+035@InterfaceAudience.Public
+036@InterfaceStability.Stable
+037public interface ResultScanner extends Closeable, Iterable<Result> {
+038
+039  @Override
+040  default Iterator<Result> iterator() {
+041return new Iterator<Result>() {
+042  // The next RowResult, possibly 
pre-read
+043  Result next = null;
+044
+045  // return true if there is another 
item pending, false if there isn't.
+046  // this method is where the actual 
advancing takes place, but you need
+047  // to call next() to consume it. 
hasNext() will only advance if there
+048  // isn't a pending next().
+049  @Override
+050  public boolean hasNext() {
+051if (next != null) {
+052  return true;
+053}
+054try {
+055  return (next = 
ResultScanner.this.next()) != null;
+056} catch (IOException e) {
+057  throw new 
UncheckedIOException(e);
+058}
+059  }
+060
+061  // get the pending next item 
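The `ResultScanner.iterator()` default method in the diff above uses a pre-read pattern: `hasNext()` does the actual advancing and caches the fetched item, while `next()` merely consumes the cached item and clears the pointer. The same mechanics can be reproduced over any source that signals exhaustion with `null`; this standalone sketch uses a hypothetical `PreReadIterator` name and a `Supplier` in place of the scanner's `next()` call.

```java
import java.util.Iterator;
import java.util.function.Supplier;

// Adapts a null-terminated supplier into an Iterator, mirroring the
// pre-read structure of the ResultScanner.iterator() default method.
class PreReadIterator<T> implements Iterator<T> {
    private final Supplier<T> source;
    private T next;  // the pre-read item, or null if none is pending

    PreReadIterator(Supplier<T> source) { this.source = source; }

    @Override
    public boolean hasNext() {
        if (next != null) {
            return true;                           // an item is already pending
        }
        return (next = source.get()) != null;      // advance here, not in next()
    }

    @Override
    public T next() {
        if (!hasNext()) {
            return null;   // scanner-style: null when exhausted (no exception)
        }
        T temp = next;     // hand out the pending item via a temporary,
        next = null;       // then clear the pointer so hasNext() advances again
        return temp;
    }
}
```

Because all advancing happens in `hasNext()`, calling it repeatedly is idempotent and never skips items, which is the property the comments in the original source call out.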

[44/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html 
b/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html
index 9fe40e4..24c8277 100644
--- a/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html
+++ b/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html
@@ -854,7 +854,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 SPLIT_POLICY
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String SPLIT_POLICY
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String SPLIT_POLICY
 
 See Also:
 Constant
 Field Values
@@ -867,7 +867,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 MAX_FILESIZE
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MAX_FILESIZE
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MAX_FILESIZE
 INTERNAL Used by HBase Shell interface to access 
this metadata
  attribute which denotes the maximum size of the store file after which
  a region split occurs
@@ -884,7 +884,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 OWNER
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String OWNER
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String OWNER
 
 See Also:
 Constant
 Field Values
@@ -897,7 +897,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 OWNER_KEY
-public static finalBytes OWNER_KEY
+public static finalBytes OWNER_KEY
 
 
 
@@ -906,7 +906,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 READONLY
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String READONLY
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String READONLY
 INTERNAL Used by rest interface to access this 
metadata
  attribute which denotes if the table is Read Only
 
@@ -922,7 +922,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 COMPACTION_ENABLED
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String COMPACTION_ENABLED
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String COMPACTION_ENABLED
 INTERNAL Used by HBase Shell interface to access 
this metadata
  attribute which denotes if the table is compaction enabled
 
@@ -938,7 +938,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 MEMSTORE_FLUSHSIZE
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MEMSTORE_FLUSHSIZE
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MEMSTORE_FLUSHSIZE
 INTERNAL Used by HBase Shell interface to access 
this metadata
  attribute which represents the maximum size of the memstore after which
  its contents are flushed onto the disk
@@ -955,7 +955,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 FLUSH_POLICY
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String FLUSH_POLICY
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String FLUSH_POLICY
 
 See Also:
 Constant
 Field Values
@@ -968,7 +968,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 IS_ROOT
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String IS_ROOT
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String IS_ROOT
 INTERNAL Used by rest interface to access this 
metadata
  attribute which denotes if the table is a -ROOT- region or not
 
@@ -984,7 +984,7 @@ implements 

[42/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html 
b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
index 4f61fb9..fa5c933 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
@@ -1182,25 +1182,25 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 Result.create(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellcells,
   http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Booleanexists,
   booleanstale,
-  booleanpartial)
+  booleanmayHaveMoreCellsInRow)
 
 
-Append
-Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+Delete
+Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
-Mutation
-Mutation.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
-Method for setting the put's familyMap
-
+Append
+Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
 Put
 Put.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
-Delete
-Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+Mutation
+Mutation.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+Method for setting the put's familyMap
+
 
 
 Increment
@@ -1222,66 +1222,66 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 Cell
-ColumnPrefixFilter.getNextCellHint(Cellcell)
+ColumnPaginationFilter.getNextCellHint(Cellcell)
 
 
 Cell
-MultipleColumnPrefixFilter.getNextCellHint(Cellcell)
+ColumnRangeFilter.getNextCellHint(Cellcell)
 
 
+Cell
+FuzzyRowFilter.getNextCellHint(CellcurrentCell)
+
+
 abstract Cell
 Filter.getNextCellHint(CellcurrentCell)
 If the filter returns the match code SEEK_NEXT_USING_HINT, 
then it should also tell which is
  the next key it must seek to.
 
 
-
-Cell
-FilterList.getNextCellHint(CellcurrentCell)
-
 
 Cell
-MultiRowRangeFilter.getNextCellHint(CellcurrentKV)
+FilterList.getNextCellHint(CellcurrentCell)
 
 
 Cell
-FuzzyRowFilter.getNextCellHint(CellcurrentCell)
+MultipleColumnPrefixFilter.getNextCellHint(Cellcell)
 
 
 Cell
-ColumnRangeFilter.getNextCellHint(Cellcell)
+TimestampsFilter.getNextCellHint(CellcurrentCell)
+Pick the next cell that the scanner should seek to.
+
 
 
 Cell
-ColumnPaginationFilter.getNextCellHint(Cellcell)
+ColumnPrefixFilter.getNextCellHint(Cellcell)
 
 
 Cell
-TimestampsFilter.getNextCellHint(CellcurrentCell)
-Pick the next cell that the scanner should seek to.
-
+MultiRowRangeFilter.getNextCellHint(CellcurrentKV)
 
 
-Cell
-KeyOnlyFilter.transformCell(Cellcell)
+abstract Cell
+Filter.transformCell(Cellv)
+Give the filter a chance to transform the passed 
KeyValue.
+
 
 
 Cell
-WhileMatchFilter.transformCell(Cellv)
+FilterList.transformCell(Cellc)
 
 
 Cell
-SkipFilter.transformCell(Cellv)
+WhileMatchFilter.transformCell(Cellv)
 
 
-abstract Cell
-Filter.transformCell(Cellv)
-Give the filter a chance to transform the passed 
KeyValue.
-
+Cell
+KeyOnlyFilter.transformCell(Cellcell)
 
 
 Cell

[45/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html 
b/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html
index 5af1bed..3c146ea 100644
--- a/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html
+++ b/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html
@@ -1033,10 +1033,13 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 BLOCKSIZE
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String BLOCKSIZE
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String BLOCKSIZE
 Size of storefile/hfile 'blocks'.  Default is DEFAULT_BLOCKSIZE.
  Use smaller block sizes for faster random-access at expense of larger
- indices (more memory consumption).
+ indices (more memory consumption). Note that this is a soft limit and that
+ blocks have overhead (metadata, CRCs) so blocks will tend to be the size
+ specified here and then some; i.e. don't expect that setting BLOCKSIZE=4k
means hbase data will align with an SSD's 4k page accesses (TODO).
 
 See Also:
 Constant
 Field Values
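The updated `BLOCKSIZE` javadoc above stresses that the value is a soft limit: cells are appended until the running block size reaches the target, and only then is the block closed, so real blocks end up at the target size "and then some" (plus metadata and CRC overhead). A toy sketch of that cut-off rule follows; `SoftLimitPacker` is a hypothetical illustration, not the HFile writer.

```java
import java.util.ArrayList;
import java.util.List;

// Toy block packer: close a block only after the running size has reached
// the target, so each closed block slightly overshoots the target.
class SoftLimitPacker {
    static List<Integer> blockSizes(int[] cellSizes, int targetBlockSize) {
        List<Integer> blocks = new ArrayList<>();
        int current = 0;
        for (int size : cellSizes) {
            current += size;                  // always append the cell first
            if (current >= targetBlockSize) { // then check the soft limit
                blocks.add(current);
                current = 0;
            }
        }
        if (current > 0) {
            blocks.add(current);              // trailing partial block
        }
        return blocks;
    }
}
```

For example, packing 3-byte cells against a 4-byte target produces 6-byte blocks, which is why a 4k `BLOCKSIZE` does not guarantee 4k-aligned reads.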
@@ -1049,7 +1052,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 LENGTH
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String LENGTH
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String LENGTH
 
 See Also:
 Constant
 Field Values
@@ -1062,7 +1065,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 TTL
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String TTL
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String TTL
 
 See Also:
 Constant
 Field Values
@@ -1075,7 +1078,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 BLOOMFILTER
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String BLOOMFILTER
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String BLOOMFILTER
 
 See Also:
 Constant
 Field Values
@@ -1088,7 +1091,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 FOREVER
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String FOREVER
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String FOREVER
 
 See Also:
 Constant
 Field Values
@@ -1101,7 +1104,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 REPLICATION_SCOPE
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String REPLICATION_SCOPE
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String REPLICATION_SCOPE
 
 See Also:
 Constant
 Field Values
@@ -1114,7 +1117,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 REPLICATION_SCOPE_BYTES
-public static finalbyte[] REPLICATION_SCOPE_BYTES
+public static finalbyte[] REPLICATION_SCOPE_BYTES
 
 
 
@@ -1123,7 +1126,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 MIN_VERSIONS
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MIN_VERSIONS
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String MIN_VERSIONS
 
 See Also:
 Constant
 Field Values
@@ -1136,7 +1139,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 KEEP_DELETED_CELLS
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String KEEP_DELETED_CELLS
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String KEEP_DELETED_CELLS
 Retain all cells across flushes and compactions even if 
they fall behind
  a delete tombstone. To see all retained 

[33/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html 
b/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
index ec3330a..67361c3 100644
--- a/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
+++ b/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
@@ -115,7 +115,7 @@ var activeTableTab = "activeTableTab";
 
 @InterfaceAudience.Public
  @InterfaceStability.Stable
-public class RemoteHTable
+public class RemoteHTable
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
 implements Table
 HTable interface to remote tables accessed via REST 
gateway
@@ -559,7 +559,7 @@ implements 
 
 RemoteHTable
-publicRemoteHTable(Clientclient,
+publicRemoteHTable(Clientclient,
 http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Constructor
 
@@ -570,7 +570,7 @@ implements 
 
 RemoteHTable
-publicRemoteHTable(Clientclient,
+publicRemoteHTable(Clientclient,
 org.apache.hadoop.conf.Configurationconf,
 http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Constructor
@@ -582,7 +582,7 @@ implements 
 
 RemoteHTable
-publicRemoteHTable(Clientclient,
+publicRemoteHTable(Clientclient,
 org.apache.hadoop.conf.Configurationconf,
 byte[]name)
 Constructor
@@ -602,7 +602,7 @@ implements 
 
 buildRowSpec
-protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringbuildRowSpec(byte[]row,
+protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringbuildRowSpec(byte[]row,
   http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">MapfamilyMap,
   longstartTime,
   longendTime,
@@ -615,7 +615,7 @@ implements 
 
 buildMultiRowSpec
-protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringbuildMultiRowSpec(byte[][]rows,
+protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringbuildMultiRowSpec(byte[][]rows,
intmaxVersions)
 
 
@@ -625,7 +625,7 @@ implements 
 
 buildResultFromModel
-protectedResult[]buildResultFromModel(org.apache.hadoop.hbase.rest.model.CellSetModelmodel)
+protectedResult[]buildResultFromModel(org.apache.hadoop.hbase.rest.model.CellSetModelmodel)
 
 
 
@@ -634,7 +634,7 @@ implements 
 
 buildModelFromPut
-protectedorg.apache.hadoop.hbase.rest.model.CellSetModelbuildModelFromPut(Putput)
+protectedorg.apache.hadoop.hbase.rest.model.CellSetModelbuildModelFromPut(Putput)
 
 
 
@@ -643,7 +643,7 @@ implements 
 
 getTableName
-publicbyte[]getTableName()
+publicbyte[]getTableName()
 
 
 
@@ -652,7 +652,7 @@ implements 
 
 getName
-public TableName getName()
+public TableName getName()
 Description copied from 
interface:Table
 Gets the fully qualified table name instance of this 
table.
 
@@ -667,7 +667,7 @@ implements 
 
 getConfiguration
-public org.apache.hadoop.conf.Configuration getConfiguration()
+public org.apache.hadoop.conf.Configuration getConfiguration()
 Description copied from 
interface:Table
 Returns the Configuration object used by this 
instance.
  
@@ -685,7 +685,7 @@ implements 
 
 getTableDescriptor
-public HTableDescriptor getTableDescriptor()
+public HTableDescriptor getTableDescriptor()
 throws IOException
 Description copied from 
interface:Table
 Gets the table descriptor for 
this table.
@@ -703,7 +703,7 @@ implements 
 
 close
-public void close()
+public void close()
throws IOException
 Description copied from 
interface:Table
 Releases any resources held or pending changes in internal 
buffers.
@@ -725,7 +725,7 @@ implements 
 
 get
-public Result get(Get get)
+public Result get(Get get)
throws IOException
 Description copied from 
interface:Table
 Extracts certain cells from a given row.
@@ -749,7 +749,7 @@ implements 
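The signatures above belong to RemoteHTable, the REST-gateway client that implements the Table interface over HTTP. A minimal sketch of wiring one up, assuming the hbase-rest client dependency and a REST gateway listening on localhost:8080 (host, port, table, and row names are illustrative, not from this commit):

```java
// Hypothetical sketch: needs hbase-rest on the classpath and a running
// HBase REST gateway on localhost:8080; "mytable"/"row1" are made up.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RemoteHTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Client wraps the HTTP connection(s) to the REST gateway.
        Client client = new Client(new Cluster().add("localhost", 8080));
        // Matches the constructor shown above: (Client, Configuration, String).
        RemoteHTable table = new RemoteHTable(client, conf, "mytable");
        try {
            Result r = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println(r.isEmpty());
        } finally {
            table.close(); // releases resources, per Table#close above
        }
    }
}
```

Because RemoteHTable talks to the gateway rather than directly to RegionServers, only HTTP connectivity is needed, which is why the constructor takes the Client/Cluster pair instead of a ZooKeeper quorum.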

[24/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
index 2a44859..05a7741 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
@@ -28,64 +28,64 @@
 020
 021
 022import java.io.IOException;
-023import java.util.ArrayList;
-024import java.util.HashMap;
-025import java.util.List;
-026import java.util.Map;
-027import java.util.NavigableSet;
-028import java.util.Set;
-029import java.util.TreeMap;
-030import java.util.TreeSet;
-031
-032import org.apache.commons.logging.Log;
-033import 
org.apache.commons.logging.LogFactory;
-034import 
org.apache.hadoop.hbase.HConstants;
-035import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-036import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-037import 
org.apache.hadoop.hbase.filter.Filter;
-038import 
org.apache.hadoop.hbase.io.TimeRange;
-039import 
org.apache.hadoop.hbase.security.access.Permission;
-040import 
org.apache.hadoop.hbase.security.visibility.Authorizations;
-041import 
org.apache.hadoop.hbase.util.Bytes;
-042
-043/**
-044 * Used to perform Get operations on a single row.
-045 * <p>
-046 * To get everything for a row, instantiate a Get object with the row to get.
-047 * To further narrow the scope of what to Get, use the methods below.
-048 * <p>
-049 * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
-050 * for each family to retrieve.
-051 * <p>
-052 * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
-053 * for each column to retrieve.
-054 * <p>
-055 * To only retrieve columns within a specific range of version timestamps,
-056 * execute {@link #setTimeRange(long, long) setTimeRange}.
-057 * <p>
-058 * To only retrieve columns with a specific timestamp, execute
-059 * {@link #setTimeStamp(long) setTimestamp}.
-060 * <p>
-061 * To limit the number of versions of each column to be returned, execute
-062 * {@link #setMaxVersions(int) setMaxVersions}.
-063 * <p>
-064 * To add a filter, call {@link #setFilter(Filter) setFilter}.
-065 */
-066@InterfaceAudience.Public
-067@InterfaceStability.Stable
-068public class Get extends Query
-069  implements Row, Comparable<Row> {
-070  private static final Log LOG = LogFactory.getLog(Get.class);
-071
-072  private byte [] row = null;
-073  private int maxVersions = 1;
-074  private boolean cacheBlocks = true;
-075  private int storeLimit = -1;
-076  private int storeOffset = 0;
-077  private boolean checkExistenceOnly = false;
-078  private boolean closestRowBefore = false;
-079  private Map<byte [], NavigableSet<byte []>> familyMap =
-080    new TreeMap<byte [], NavigableSet<byte []>>(Bytes.BYTES_COMPARATOR);
+023import java.nio.ByteBuffer;
+024import java.util.ArrayList;
+025import java.util.HashMap;
+026import java.util.List;
+027import java.util.Map;
+028import java.util.NavigableSet;
+029import java.util.Set;
+030import java.util.TreeMap;
+031import java.util.TreeSet;
+032
+033import org.apache.commons.logging.Log;
+034import 
org.apache.commons.logging.LogFactory;
+035import 
org.apache.hadoop.hbase.HConstants;
+036import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
+037import 
org.apache.hadoop.hbase.classification.InterfaceStability;
+038import 
org.apache.hadoop.hbase.filter.Filter;
+039import 
org.apache.hadoop.hbase.io.TimeRange;
+040import 
org.apache.hadoop.hbase.security.access.Permission;
+041import 
org.apache.hadoop.hbase.security.visibility.Authorizations;
+042import 
org.apache.hadoop.hbase.util.Bytes;
+043
+044/**
+045 * Used to perform Get operations on a single row.
+046 * <p>
+047 * To get everything for a row, instantiate a Get object with the row to get.
+048 * To further narrow the scope of what to Get, use the methods below.
+049 * <p>
+050 * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
+051 * for each family to retrieve.
+052 * <p>
+053 * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
+054 * for each column to retrieve.
+055 * <p>
+056 * To only retrieve columns within a specific range of version timestamps,
+057 * execute {@link #setTimeRange(long, long) setTimeRange}.
+058 * <p>
+059 * To only retrieve columns with a specific timestamp, execute
+060 * {@link #setTimeStamp(long) setTimestamp}.
+061 * <p>
+062 * To limit the number of versions of each column to be returned, execute
+063 * {@link #setMaxVersions(int) setMaxVersions}.
+064 * <p>
+065 * To add a filter, call {@link #setFilter(Filter) setFilter}.
+066 */
+067@InterfaceAudience.Public
+068@InterfaceStability.Stable
+069public class Get extends Query
+070  implements Row, Comparable<Row> {
+071  private static 
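The class comment above lists the methods that narrow a Get; a short sketch of how they compose, assuming the hbase-client dependency is on the classpath (row, family, and qualifier names are illustrative):

```java
// Hedged sketch: "row1", "cf", and "q1" are made-up names; in HBase 1.x
// setTimeRange declares IOException, hence the throws clause.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.util.Bytes;

public class GetSketch {
    public static Get narrowGet() throws IOException {
        Get get = new Get(Bytes.toBytes("row1"));                // everything for row1...
        get.addFamily(Bytes.toBytes("cf"));                      // ...only family cf
        get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1")); // ...only column cf:q1
        get.setTimeRange(0L, Long.MAX_VALUE);                    // version timestamp range
        get.setMaxVersions(3);                                   // at most 3 versions per column
        return get;
    }
}
```

Each narrowing call restricts, never widens, the result, so the order of the calls does not matter.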

[28/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html 
b/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
index 568793e..63ebc7f 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
@@ -142,8 +142,8 @@
 134}
 135
 136
scheduler.setRemoveOnCancelPolicy(true);
-137scheduledChores = new HashMap<ScheduledChore, ScheduledFuture<?>>();
-138choresMissingStartTime = new HashMap<ScheduledChore, Boolean>();
+137scheduledChores = new HashMap<>();
+138choresMissingStartTime = new HashMap<>();
 139  }
 140
 141  /**
@@ -356,7 +356,7 @@
 348  }
 349
 350  private void cancelAllChores(final 
boolean mayInterruptIfRunning) {
-351ArrayList<ScheduledChore> choresToCancel = new ArrayList<ScheduledChore>(scheduledChores.keySet().size());
+351ArrayList<ScheduledChore> choresToCancel = new ArrayList<>(scheduledChores.keySet().size());
 352// Build list of chores to cancel so 
we can iterate through a set that won't change
 353// as chores are cancelled. If we 
tried to cancel each chore while iterating through
 354// keySet the results would be 
undefined because the keySet would be changing
@@ -373,7 +373,7 @@
 365   * Prints a summary of important 
details about the chore. Used for debugging purposes
 366   */
 367  private void printChoreDetails(final 
String header, ScheduledChore chore) {
-368LinkedHashMap<String, String> output = new LinkedHashMap<String, String>();
+368LinkedHashMap<String, String> output = new LinkedHashMap<>();
 369output.put(header, "");
 370output.put("Chore name: ", 
chore.getName());
 371output.put("Chore period: ", 
Integer.toString(chore.getPeriod()));
@@ -388,7 +388,7 @@
 380   * Prints a summary of important 
details about the service. Used for debugging purposes
 381   */
 382  private void 
printChoreServiceDetails(final String header) {
-383LinkedHashMap<String, String> output = new LinkedHashMap<String, String>();
+383LinkedHashMap<String, String> output = new LinkedHashMap<>();
 384output.put(header, "");
 385output.put("ChoreService 
corePoolSize: ", Integer.toString(getCorePoolSize()));
 386output.put("ChoreService 
scheduledChores: ", Integer.toString(getNumberOfScheduledChores()));
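Most hunks in this commit replace explicit constructor type arguments with the Java 7+ diamond operator, e.g. `new HashMap<ScheduledChore, ScheduledFuture<?>>()` becoming `new HashMap<>()`; the compiler infers the type arguments from the declaration. A self-contained illustration that the two forms produce equivalent maps:

```java
import java.util.HashMap;
import java.util.Map;

public class DiamondDemo {
    public static void main(String[] args) {
        // Pre-Java 7 style: type arguments repeated on the right-hand side.
        Map<String, Integer> verbose = new HashMap<String, Integer>();
        // Java 7+ diamond operator: type arguments inferred by the compiler.
        Map<String, Integer> concise = new HashMap<>();
        verbose.put("a", 1);
        concise.put("a", 1);
        // Both maps have identical type and contents.
        System.out.println(verbose.equals(concise));
    }
}
```

The change is purely syntactic; the compiled bytecode is the same, which is why these site pages differ only in the rendered source text.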

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html 
b/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
index 3044d47..2fae44e 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
@@ -43,17 +43,19 @@
 035  public static final String COPROC = 
"Coprocesssor";
 036  public static final String REPLICATION 
= "Replication";
 037  public static final String PHOENIX = 
"Phoenix";
-038  /**
-039   * Denotes class names that appear in 
user facing configuration files.
-040   */
-041  public static final String CONFIG = 
"Configuration";
-042
-043  /**
-044   * Denotes classes used as tools (Used 
from cmd line). Usually, the compatibility is required
-045   * for class name, and arguments.
-046   */
-047  public static final String TOOLS = 
"Tools";
-048}
+038  public static final String SPARK = 
"Spark";
+039
+040  /**
+041   * Denotes class names that appear in 
user facing configuration files.
+042   */
+043  public static final String CONFIG = 
"Configuration";
+044
+045  /**
+046   * Denotes classes used as tools (Used 
from cmd line). Usually, the compatibility is required
+047   * for class name, and arguments.
+048   */
+049  public static final String TOOLS = 
"Tools";
+050}
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html 
b/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
index 791cb88..a881d8a 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
@@ -111,202 +111,202 @@
 103  /**
 104   * Size of storefile/hfile 'blocks'.  
Default is {@link #DEFAULT_BLOCKSIZE}.
 105   * Use smaller block sizes for faster 
random-access at expense of larger
-106   * indices (more memory consumption).
-107   */
-108  public static final String BLOCKSIZE = 
"BLOCKSIZE";
-109
-110  public static final String LENGTH = 
"LENGTH";
-111  public static final String TTL = 
"TTL";

[13/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html 
b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
index 3e217b2..99a73b9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
@@ -541,56 +541,55 @@
 533}
 534  }
 535
-536  static final Map<Pair<String,String>,KeyProvider> keyProviderCache =
-537      new ConcurrentHashMap<Pair<String,String>,KeyProvider>();
-538
-539  public static KeyProvider getKeyProvider(Configuration conf) {
-540    String providerClassName = conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY,
-541      KeyStoreKeyProvider.class.getName());
-542    String providerParameters = conf.get(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "");
-543    try {
-544      Pair<String,String> providerCacheKey = new Pair<String,String>(providerClassName,
-545        providerParameters);
-546      KeyProvider provider = keyProviderCache.get(providerCacheKey);
-547      if (provider != null) {
-548        return provider;
-549      }
-550      provider = (KeyProvider) ReflectionUtils.newInstance(
-551        getClassLoaderForClass(KeyProvider.class).loadClass(providerClassName),
-552        conf);
-553      provider.init(providerParameters);
-554      if (LOG.isDebugEnabled()) {
-555        LOG.debug("Installed " + providerClassName + " into key provider cache");
-556      }
-557      keyProviderCache.put(providerCacheKey, provider);
-558      return provider;
-559    } catch (Exception e) {
-560      throw new RuntimeException(e);
-561    }
-562  }
-563
-564  public static void incrementIv(byte[] iv) {
-565    incrementIv(iv, 1);
-566  }
-567
-568  public static void incrementIv(byte[] iv, int v) {
-569    int length = iv.length;
-570    boolean carry = true;
-571    // TODO: Optimize for v > 1, e.g. 16, 32
-572    do {
-573      for (int i = 0; i < length; i++) {
-574        if (carry) {
-575          iv[i] = (byte) ((iv[i] + 1) & 0xFF);
-576          carry = 0 == iv[i];
-577        } else {
-578          break;
-579        }
-580      }
-581      v--;
-582    } while (v > 0);
-583  }
-584
-585}
+536  static final Map<Pair<String,String>,KeyProvider> keyProviderCache = new ConcurrentHashMap<>();
+537
+538  public static KeyProvider getKeyProvider(Configuration conf) {
+539    String providerClassName = conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY,
+540      KeyStoreKeyProvider.class.getName());
+541    String providerParameters = conf.get(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "");
+542    try {
+543      Pair<String,String> providerCacheKey = new Pair<>(providerClassName,
+544        providerParameters);
+545      KeyProvider provider = keyProviderCache.get(providerCacheKey);
+546      if (provider != null) {
+547        return provider;
+548      }
+549      provider = (KeyProvider) ReflectionUtils.newInstance(
+550        getClassLoaderForClass(KeyProvider.class).loadClass(providerClassName),
+551        conf);
+552      provider.init(providerParameters);
+553      if (LOG.isDebugEnabled()) {
+554        LOG.debug("Installed " + providerClassName + " into key provider cache");
+555      }
+556      keyProviderCache.put(providerCacheKey, provider);
+557      return provider;
+558    } catch (Exception e) {
+559      throw new RuntimeException(e);
+560    }
+561  }
+562
+563  public static void incrementIv(byte[] iv) {
+564    incrementIv(iv, 1);
+565  }
+566
+567  public static void incrementIv(byte[] iv, int v) {
+568    int length = iv.length;
+569    boolean carry = true;
+570    // TODO: Optimize for v > 1, e.g. 16, 32
+571    do {
+572      for (int i = 0; i < length; i++) {
+573        if (carry) {
+574          iv[i] = (byte) ((iv[i] + 1) & 0xFF);
+575          carry = 0 == iv[i];
+576        } else {
+577          break;
+578        }
+579      }
+580      v--;
+581    } while (v > 0);
+582  }
+583
+584}
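The incrementIv carry loop above treats the IV as a little-endian counter: byte 0 is incremented, and an overflow to zero carries into the next byte. The same loop extracted into a runnable demo (logic copied from the listing; the demo IV values are illustrative):

```java
public class IncrementIvDemo {
    // Same carry loop as Encryption.incrementIv in the listing above:
    // increment byte 0 and propagate the carry while a byte wraps to zero.
    static void incrementIv(byte[] iv, int v) {
        int length = iv.length;
        boolean carry = true;
        do {
            for (int i = 0; i < length; i++) {
                if (carry) {
                    iv[i] = (byte) ((iv[i] + 1) & 0xFF);
                    carry = 0 == iv[i];  // wrapped to zero -> carry continues
                } else {
                    break;
                }
            }
            v--;
        } while (v > 0);
    }

    public static void main(String[] args) {
        // 0xFF in byte 0 overflows to 0x00 and carries into byte 1.
        byte[] iv = new byte[] { (byte) 0xFF, 0x00 };
        incrementIv(iv, 1);
        System.out.println(iv[0] + "," + iv[1]);
    }
}
```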
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html 
b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
index 3e217b2..99a73b9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
@@ -541,56 +541,55 @@
 533}
 534  }
 535
-536  static final Map<Pair<String,String>,KeyProvider> keyProviderCache =
-537      new ConcurrentHashMap<Pair<String,String>,KeyProvider>();
-538
-539  public static KeyProvider getKeyProvider(Configuration conf) {
-540   

[14/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html 
b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
index e5afc32..32c4a50 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
@@ -124,309 +124,309 @@
 116  } else {
 117if (range.contains(rowArr, 
offset, length)) {
 118  currentReturnCode = 
ReturnCode.INCLUDE;
-119} else currentReturnCode = 
ReturnCode.SEEK_NEXT_USING_HINT;
-120  }
-121} else {
-122  currentReturnCode = 
ReturnCode.INCLUDE;
-123}
-124return false;
-125  }
-126
-127  @Override
-128  public ReturnCode filterKeyValue(Cell 
ignored) {
-129return currentReturnCode;
-130  }
-131
-132  @Override
-133  public Cell getNextCellHint(Cell 
currentKV) {
-134// skip to the next range's start 
row
-135return 
CellUtil.createFirstOnRow(range.startRow, 0,
-136(short) range.startRow.length);
-137  }
-138
-139  /**
-140   * @return The filter serialized using 
pb
-141   */
-142  public byte[] toByteArray() {
-143
FilterProtos.MultiRowRangeFilter.Builder builder = 
FilterProtos.MultiRowRangeFilter
-144.newBuilder();
-145for (RowRange range : rangeList) {
-146  if (range != null) {
-147FilterProtos.RowRange.Builder 
rangebuilder = FilterProtos.RowRange.newBuilder();
-148if (range.startRow != null)
-149  
rangebuilder.setStartRow(UnsafeByteOperations.unsafeWrap(range.startRow));
-150
rangebuilder.setStartRowInclusive(range.startRowInclusive);
-151if (range.stopRow != null)
-152  
rangebuilder.setStopRow(UnsafeByteOperations.unsafeWrap(range.stopRow));
-153
rangebuilder.setStopRowInclusive(range.stopRowInclusive);
-154range.isScan = 
Bytes.equals(range.startRow, range.stopRow) ? 1 : 0;
-155
builder.addRowRangeList(rangebuilder.build());
-156  }
-157}
-158return 
builder.build().toByteArray();
-159  }
-160
-161  /**
-162   * @param pbBytes A pb serialized 
instance
-163   * @return An instance of 
MultiRowRangeFilter
-164   * @throws 
org.apache.hadoop.hbase.exceptions.DeserializationException
-165   */
-166  public static MultiRowRangeFilter 
parseFrom(final byte[] pbBytes)
-167  throws DeserializationException {
-168FilterProtos.MultiRowRangeFilter 
proto;
-169try {
-170  proto = 
FilterProtos.MultiRowRangeFilter.parseFrom(pbBytes);
-171} catch 
(InvalidProtocolBufferException e) {
-172  throw new 
DeserializationException(e);
-173}
-174int length = proto.getRowRangeListCount();
-175List<FilterProtos.RowRange> rangeProtos = proto.getRowRangeListList();
-176List<RowRange> rangeList = new ArrayList<RowRange>(length);
-177for (FilterProtos.RowRange rangeProto 
: rangeProtos) {
-178  RowRange range = new 
RowRange(rangeProto.hasStartRow() ? rangeProto.getStartRow()
-179  .toByteArray() : null, 
rangeProto.getStartRowInclusive(), rangeProto.hasStopRow() ?
-180  
rangeProto.getStopRow().toByteArray() : null, 
rangeProto.getStopRowInclusive());
-181  rangeList.add(range);
-182}
-183return new 
MultiRowRangeFilter(rangeList);
-184  }
-185
-186  /**
-187   * @param o the filter to compare
-188   * @return true if and only if the 
fields of the filter that are serialized are equal to the
-189   * corresponding fields in 
other. Used for testing.
-190   */
-191  boolean areSerializedFieldsEqual(Filter 
o) {
-192if (o == this)
-193  return true;
-194if (!(o instanceof 
MultiRowRangeFilter))
-195  return false;
-196
-197MultiRowRangeFilter other = 
(MultiRowRangeFilter) o;
-198if (this.rangeList.size() != 
other.rangeList.size())
-199  return false;
-200for (int i = 0; i < rangeList.size(); ++i) {
-201  RowRange thisRange = this.rangeList.get(i);
-202  RowRange otherRange = other.rangeList.get(i);
-203  if (!(Bytes.equals(thisRange.startRow, otherRange.startRow) && Bytes.equals(
-204  thisRange.stopRow, otherRange.stopRow) && (thisRange.startRowInclusive ==
-205  otherRange.startRowInclusive) && (thisRange.stopRowInclusive ==
-206  otherRange.stopRowInclusive))) {
-207return false;
-208  }
-209}
-210return true;
-211  }
-212
-213  /**
-214   * calculate the position where the row 
key in the ranges list.
-215   *
-216   * @param rowKey the row key to 
calculate
-217   * @return index the position of the 
row key
-218   */
-219  private int getNextRangeIndex(byte[] 
rowKey) {
-220RowRange temp = new RowRange(rowKey, 
true, null, true);
-221int index = 
Collections.binarySearch(rangeList, temp);
-222  

[16/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html 
b/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
index 3e67195..3a0b315 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
@@ -91,7 +91,7 @@
 083  p = fuzzyKeysData.get(i);
 084  if (p.getFirst().length != 
p.getSecond().length) {
 085Pair<String, String> readable =
-086new Pair<String, String>(Bytes.toStringBinary(p.getFirst()), Bytes.toStringBinary(p
+086new Pair<>(Bytes.toStringBinary(p.getFirst()), Bytes.toStringBinary(p
 087.getSecond()));
 088throw new 
IllegalArgumentException("Fuzzy pair lengths do not match: " + readable);
 089  }
@@ -199,440 +199,439 @@
 191private boolean initialized = 
false;
 192
 193RowTracker() {
-194  nextRows =
-195  new PriorityQueue<Pair<byte[], Pair<byte[], byte[]>>>(fuzzyKeysData.size(),
-196  new Comparator<Pair<byte[], Pair<byte[], byte[]>>>() {
-197@Override
-198public int compare(Pair<byte[], Pair<byte[], byte[]>> o1,
-199Pair<byte[], Pair<byte[], byte[]>> o2) {
-200  return isReversed()? Bytes.compareTo(o2.getFirst(), o1.getFirst()):
-201Bytes.compareTo(o1.getFirst(), o2.getFirst());
-202}
-203  });
-204}
-205
-206byte[] nextRow() {
-207  if (nextRows.isEmpty()) {
-208throw new 
IllegalStateException(
-209"NextRows should not be 
empty, make sure to call nextRow() after updateTracker() return true");
-210  } else {
-211return 
nextRows.peek().getFirst();
-212  }
-213}
-214
-215boolean updateTracker(Cell 
currentCell) {
-216  if (!initialized) {
-217for (Pair<byte[], byte[]> fuzzyData : fuzzyKeysData) {
-218  updateWith(currentCell, fuzzyData);
-219}
-220initialized = true;
-221  } else {
-222while (!nextRows.isEmpty() && !lessThan(currentCell, nextRows.peek().getFirst())) {
-223  Pair<byte[], Pair<byte[], byte[]>> head = nextRows.poll();
-224  Pair<byte[], byte[]> fuzzyData = head.getSecond();
-225  updateWith(currentCell, fuzzyData);
-226}
-227  }
-228  return !nextRows.isEmpty();
-229}
-230
-231boolean lessThan(Cell currentCell, byte[] nextRowKey) {
-232  int compareResult =
-233  CellComparator.COMPARATOR.compareRows(currentCell, nextRowKey, 0, nextRowKey.length);
-234  return (!isReversed() && compareResult < 0) || (isReversed() && compareResult > 0);
-235}
-236
-237void updateWith(Cell currentCell, Pair<byte[], byte[]> fuzzyData) {
-238  byte[] nextRowKeyCandidate =
-239  getNextForFuzzyRule(isReversed(), currentCell.getRowArray(), currentCell.getRowOffset(),
-240currentCell.getRowLength(), fuzzyData.getFirst(), fuzzyData.getSecond());
-241  if (nextRowKeyCandidate != null) {
-242nextRows.add(new Pair<byte[], Pair<byte[], byte[]>>(nextRowKeyCandidate, fuzzyData));
-243  }
-244}
-245
-246  }
-247
-248  @Override
-249  public boolean filterAllRemaining() {
-250return done;
-251  }
-252
-253  /**
-254   * @return The filter serialized using 
pb
-255   */
-256  public byte[] toByteArray() {
-257FilterProtos.FuzzyRowFilter.Builder 
builder = FilterProtos.FuzzyRowFilter.newBuilder();
-258for (Pair<byte[], byte[]> fuzzyData : fuzzyKeysData) {
-259  BytesBytesPair.Builder bbpBuilder = 
BytesBytesPair.newBuilder();
-260  
bbpBuilder.setFirst(UnsafeByteOperations.unsafeWrap(fuzzyData.getFirst()));
-261  
bbpBuilder.setSecond(UnsafeByteOperations.unsafeWrap(fuzzyData.getSecond()));
-262  
builder.addFuzzyKeysData(bbpBuilder);
-263}
-264return 
builder.build().toByteArray();
-265  }
-266
-267  /**
-268   * @param pbBytes A pb serialized 
{@link FuzzyRowFilter} instance
-269   * @return An instance of {@link 
FuzzyRowFilter} made from codebytes/code
-270   * @throws DeserializationException
-271   * @see #toByteArray
-272   */
-273  public static FuzzyRowFilter 
parseFrom(final byte[] pbBytes) throws DeserializationException {
-274FilterProtos.FuzzyRowFilter proto;
-275try {
-276  proto = 
FilterProtos.FuzzyRowFilter.parseFrom(pbBytes);
-277} catch 
(InvalidProtocolBufferException e) {
-278  throw new 
DeserializationException(e);
-279}
-280int count = proto.getFuzzyKeysDataCount();
-281ArrayList<Pair<byte[], byte[]>> fuzzyKeysData = new ArrayList<Pair<byte[], byte[]>>(count);
-282for (int i = 0; i < count; ++i) {
-283  BytesBytesPair current = 
proto.getFuzzyKeysData(i);
-284  byte[] 
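As the length check at source line 084 above enforces, each fuzzy pair couples a row-key template with an equal-length mask, where a mask byte of 0 pins the corresponding template byte and 1 lets it vary. A hedged construction sketch, assuming the hbase-client dependency; the row-key layout is illustrative:

```java
// Hedged sketch: needs hbase-client on the classpath. The "4-byte user id +
// 4-byte action code" key layout is made up for illustration.
import java.util.Arrays;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyRowSketch {
    public static FuzzyRowFilter anyUserWithAction99() {
        // Template: user id bytes are don't-care, action code is 99.
        byte[] template = {0, 0, 0, 0, 0, 0, 0, 99};
        // Mask: 1 = position may be anything, 0 = must equal the template.
        byte[] mask     = {1, 1, 1, 1, 0, 0, 0, 0};
        return new FuzzyRowFilter(
            Arrays.asList(new Pair<byte[], byte[]>(template, mask)));
    }
}
```

The RowTracker shown above then uses these pairs to compute seek hints, jumping the scanner directly to the next key that can satisfy some fuzzy rule.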

[22/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
index 610fff9..c58b6a61 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
@@ -100,484 +100,482 @@
 092  protected Durability durability = 
Durability.USE_DEFAULT;
 093
 094  // A Map sorted by column family.
-095  protected NavigableMap<byte [], List<Cell>> familyMap =
-096new TreeMap<byte [], List<Cell>>(Bytes.BYTES_COMPARATOR);
-097
-098  @Override
-099  public CellScanner cellScanner() {
-100return 
CellUtil.createCellScanner(getFamilyCellMap());
-101  }
-102
-103  /**
-104   * Creates an empty list if one doesn't 
exist for the given column family
-105   * or else it returns the associated 
list of Cell objects.
-106   *
-107   * @param family column family
-108   * @return a list of Cell objects, 
returns an empty list if one doesn't exist.
-109   */
-110  List<Cell> getCellList(byte[] family) {
-111List<Cell> list = this.familyMap.get(family);
-112if (list == null) {
-113  list = new ArrayList<Cell>();
-114}
-115return list;
-116}
-117
-118  /*
-119   * Create a KeyValue with this objects 
row key and the Put identifier.
-120   *
-121   * @return a KeyValue with this objects 
row key and the Put identifier.
-122   */
-123  KeyValue createPutKeyValue(byte[] 
family, byte[] qualifier, long ts, byte[] value) {
-124return new KeyValue(this.row, family, 
qualifier, ts, KeyValue.Type.Put, value);
-125  }
-126
-127  /**
-128   * Create a KeyValue with this objects 
row key and the Put identifier.
-129   * @param family
-130   * @param qualifier
-131   * @param ts
-132   * @param value
-133   * @param tags - Specify the Tags as an 
Array
-134   * @return a KeyValue with this objects 
row key and the Put identifier.
-135   */
-136  KeyValue createPutKeyValue(byte[] 
family, byte[] qualifier, long ts, byte[] value, Tag[] tags) {
-137KeyValue kvWithTag = new 
KeyValue(this.row, family, qualifier, ts, value, tags);
-138return kvWithTag;
-139  }
-140
-141  /*
-142   * Create a KeyValue with this objects 
row key and the Put identifier.
-143   *
-144   * @return a KeyValue with this objects 
row key and the Put identifier.
-145   */
-146  KeyValue createPutKeyValue(byte[] 
family, ByteBuffer qualifier, long ts, ByteBuffer value,
-147  Tag[] tags) {
-148return new KeyValue(this.row, 0, 
this.row == null ? 0 : this.row.length,
-149family, 0, family == null ? 0 : 
family.length,
-150qualifier, ts, KeyValue.Type.Put, 
value, tags != null ? Arrays.asList(tags) : null);
-151  }
-152
-153  /**
-154   * Compile the column family (i.e. 
schema) information
-155   * into a Map. Useful for parsing and 
aggregation by debugging,
-156   * logging, and administration tools.
-157   * @return Map
-158   */
-159  @Override
-160  public Map<String, Object> getFingerprint() {
-161Map<String, Object> map = new HashMap<String, Object>();
-162List<String> families = new ArrayList<String>(this.familyMap.entrySet().size());
-163// ideally, we would also include table information, but that information
-164// is not stored in each Operation instance.
-165map.put("families", families);
-166for (Map.Entry<byte [], List<Cell>> entry : this.familyMap.entrySet()) {
-167  families.add(Bytes.toStringBinary(entry.getKey()));
-168}
-169return map;
-170}
-171
-172  /**
-173   * Compile the details beyond the scope 
of getFingerprint (row, columns,
-174   * timestamps, etc.) into a Map along 
with the fingerprinted information.
-175   * Useful for debugging, logging, and 
administration tools.
-176   * @param maxCols a limit on the number 
of columns output prior to truncation
-177   * @return Map
-178   */
-179  @Override
-180  public Map<String, Object> toMap(int maxCols) {
-181// we start with the fingerprint map and build on top of it.
-182Map<String, Object> map = getFingerprint();
-183// replace the fingerprint's simple list of families with a
-184// map from column families to lists of qualifiers and kv details
-185Map<String, List<Map<String, Object>>> columns =
-186  new HashMap<String, List<Map<String, Object>>>();
-187map.put("families", columns);
-188map.put("row", Bytes.toStringBinary(this.row));
-189int colCount = 0;
-190// iterate through all column families affected
-191for (Map.Entry<byte [], List<Cell>> entry : this.familyMap.entrySet()) {
-192  // map from this family to details for each cell affected within the family
-193  List<Map<String, Object>> qualifierDetails = new ArrayList<Map<String, Object>>();
-194  columns.put(Bytes.toStringBinary(entry.getKey()), qualifierDetails);
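getFingerprint above records only the family names, while toMap layers per-cell detail on top of that fingerprint. The fingerprint shape can be shown without HBase types, using a plain map as a stand-in for the real family-to-cells map (family and cell names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FingerprintDemo {
    // Stand-in for Mutation#getFingerprint: collapse a family -> cells map
    // to just the list of family names, as the method above does.
    static Map<String, Object> fingerprint(Map<String, List<String>> familyMap) {
        Map<String, Object> map = new HashMap<>();
        List<String> families = new ArrayList<>(familyMap.size());
        map.put("families", families);
        for (Map.Entry<String, List<String>> entry : familyMap.entrySet()) {
            families.add(entry.getKey());
        }
        return map;
    }

    public static void main(String[] args) {
        // TreeMap mirrors the sorted family map in Mutation.
        Map<String, List<String>> familyMap = new TreeMap<>();
        familyMap.put("cf1", List.of("cell-a", "cell-b"));
        familyMap.put("cf2", List.of("cell-c"));
        System.out.println(fingerprint(familyMap).get("families"));
    }
}
```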

[03/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html 
b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
index 6552d0b..bf243f9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
@@ -572,7 +572,7 @@
 564  final FileSystem fs, final Path 
snapshotDir) throws IOException {
 565SnapshotDescription snapshotDesc = 
SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDir);
 566
-567final List<Pair<SnapshotFileInfo, Long>> files = new ArrayList<Pair<SnapshotFileInfo, Long>>();
+567final List<Pair<SnapshotFileInfo, Long>> files = new ArrayList<>();
 568final TableName table = 
TableName.valueOf(snapshotDesc.getTable());
 569
 570// Get snapshot files
@@ -599,7 +599,7 @@
 591} else {
 592  size = 
HFileLink.buildFromHFileLinkPattern(conf, path).getFileStatus(fs).getLen();
 593}
-594files.add(new Pair<SnapshotFileInfo, Long>(fileInfo, size));
+594files.add(new Pair<>(fileInfo, size));
 595  }
 596}
 597});
@@ -626,504 +626,503 @@
 618});
 619
 620// create balanced groups
-621List<List<Pair<SnapshotFileInfo, Long>>> fileGroups =
-622  new LinkedList<List<Pair<SnapshotFileInfo, Long>>>();
-623long[] sizeGroups = new long[ngroups];
-624int hi = files.size() - 1;
-625int lo = 0;
-626
-627List<Pair<SnapshotFileInfo, Long>> group;
-628int dir = 1;
-629int g = 0;
-630
-631while (hi >= lo) {
-632  if (g == fileGroups.size()) {
-633group = new LinkedList<Pair<SnapshotFileInfo, Long>>();
-634fileGroups.add(group);
-635  } else {
-636group = fileGroups.get(g);
-637  }
-638
-639  Pair<SnapshotFileInfo, Long> fileInfo = files.get(hi--);
-640
-641  // add the hi one
-642  sizeGroups[g] += fileInfo.getSecond();
-643  group.add(fileInfo);
-644
-645  // change direction when at the end or the beginning
-646  g += dir;
-647  if (g == ngroups) {
-648dir = -1;
-649g = ngroups - 1;
-650  } else if (g < 0) {
-651dir = 1;
-652g = 0;
-653  }
-654}
-655
-656if (LOG.isDebugEnabled()) {
-657  for (int i = 0; i < sizeGroups.length; ++i) {
-658LOG.debug("export split=" + i + " size=" + StringUtils.humanReadableInt(sizeGroups[i]));
-659  }
-660}
-661
-662return fileGroups;
-663  }
-664
-665  private static class ExportSnapshotInputFormat extends InputFormat<BytesWritable, NullWritable> {
-666    @Override
-667    public RecordReader<BytesWritable, NullWritable> createRecordReader(InputSplit split,
-668        TaskAttemptContext tac) throws IOException, InterruptedException {
-669      return new ExportSnapshotRecordReader(((ExportSnapshotInputSplit)split).getSplitKeys());
-670    }
-671
-672    @Override
-673    public List<InputSplit> getSplits(JobContext context) throws IOException, InterruptedException {
-674      Configuration conf = context.getConfiguration();
-675      Path snapshotDir = new Path(conf.get(CONF_SNAPSHOT_DIR));
-676      FileSystem fs = FileSystem.get(snapshotDir.toUri(), conf);
-677
-678      List<Pair<SnapshotFileInfo, Long>> snapshotFiles = getSnapshotFiles(conf, fs, snapshotDir);
-679      int mappers = conf.getInt(CONF_NUM_SPLITS, 0);
-680      if (mappers == 0 && snapshotFiles.size() > 0) {
-681        mappers = 1 + (snapshotFiles.size() / conf.getInt(CONF_MAP_GROUP, 10));
-682        mappers = Math.min(mappers, snapshotFiles.size());
-683        conf.setInt(CONF_NUM_SPLITS, mappers);
-684        conf.setInt(MR_NUM_MAPS, mappers);
-685      }
-686
-687      List<List<Pair<SnapshotFileInfo, Long>>> groups = getBalancedSplits(snapshotFiles, mappers);
-688      List<InputSplit> splits = new ArrayList<InputSplit>(groups.size());
-689      for (List<Pair<SnapshotFileInfo, Long>> files: groups) {
-690        splits.add(new ExportSnapshotInputSplit(files));
-691      }
-692      return splits;
-693    }
-694
-695    private static class ExportSnapshotInputSplit extends InputSplit implements Writable {
-696      private List<Pair<BytesWritable, Long>> files;
-697      private long length;
-698
-699      public ExportSnapshotInputSplit() {
-700        this.files = null;
-701      }
-702
-703      public ExportSnapshotInputSplit(final List<Pair<SnapshotFileInfo, Long>> snapshotFiles) {
-704        this.files = new ArrayList<Pair<BytesWritable, Long>>(snapshotFiles.size());
-705        for (Pair<SnapshotFileInfo, Long> fileInfo: snapshotFiles) {
-706          this.files.add(new Pair<BytesWritable, Long>(
-707            new BytesWritable(fileInfo.getFirst().toByteArray()), fileInfo.getSecond()));
-708          this.length += fileInfo.getSecond();
-709        }

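The removed `getBalancedSplits` code above deals files to mapper groups in a serpentine (zig-zag) order so group totals stay close. A self-contained sketch of that balancing rule, assuming the file sizes are sorted ascending and consumed from the large end (the real method works on `Pair<SnapshotFileInfo, Long>`; the class and method names below are illustrative, not from the HBase source):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the zig-zag balancing in getBalancedSplits above.
// Files are taken largest-first and dealt to groups in a back-and-forth
// "snake" order; the boundary groups receive two consecutive files when
// the direction flips, which keeps the per-group totals balanced.
class BalancedSplits {
  static List<List<Long>> balance(List<Long> sizes, int ngroups) {
    List<List<Long>> groups = new ArrayList<>();
    for (int i = 0; i < ngroups; i++) {
      groups.add(new ArrayList<Long>());
    }
    int hi = sizes.size() - 1; // take the largest remaining file first
    int g = 0;
    int dir = 1;
    while (hi >= 0) {
      groups.get(g).add(sizes.get(hi--));
      g += dir;
      if (g == ngroups) {      // bounce off the last group
        dir = -1;
        g = ngroups - 1;
      } else if (g < 0) {      // bounce off the first group
        dir = 1;
        g = 0;
      }
    }
    return groups;
  }
}
```

With sizes `[1, 2, 3, 4]` and two groups this yields `[4, 1]` and `[3, 2]`, so both groups total 5.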
[34/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html 
b/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html
index 7f5fd06..cefe0fa 100644
--- a/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html
+++ b/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html
@@ -788,7 +788,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 getKeyProvider
-public static KeyProvider getKeyProvider(org.apache.hadoop.conf.Configuration conf)
+public static KeyProvider getKeyProvider(org.apache.hadoop.conf.Configuration conf)
 
 
 
@@ -797,7 +797,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 incrementIv
-public static void incrementIv(byte[] iv)
+public static void incrementIv(byte[] iv)
 
 
 
@@ -806,7 +806,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 incrementIv
-public static void incrementIv(byte[] iv,
+public static void incrementIv(byte[] iv,
                                int v)
 
 
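The `incrementIv(byte[] iv, int v)` signature above suggests advancing an initialization vector by an integer step. A hedged sketch of one way such a counter increment can work, adding `v` with carry as if the IV were a little-endian unsigned counter (the actual HBase implementation is not shown in this diff and may differ, e.g. in byte order or overflow policy):

```java
// Hypothetical helper mirroring the incrementIv signature above: treat
// `iv` as a little-endian unsigned counter and add `v` with carry. This
// is an assumption for illustration, not the real
// org.apache.hadoop.hbase.io.crypto.Encryption code.
class IvCounter {
  static void incrementIv(byte[] iv, int v) {
    long carry = v & 0xFFFFFFFFL;     // avoid sign extension of v
    for (int i = 0; i < iv.length && carry != 0; i++) {
      carry += iv[i] & 0xFF;          // add the current byte as unsigned
      iv[i] = (byte) carry;           // keep the low 8 bits
      carry >>>= 8;                   // propagate the carry upward
    }
  }
}
```

For example, incrementing `{0xFF, 0x00}` by 1 carries into the second byte, giving `{0x00, 0x01}`.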

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html 
b/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html
index d4a6bea..18086c0 100644
--- a/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html
+++ b/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html
@@ -383,7 +383,7 @@ the order they are declared.
 
 
 values
-public static DataBlockEncoding[] values()
+public static DataBlockEncoding[] values()
 Returns an array containing the constants of this enum 
type, in
 the order they are declared.  This method may be used to iterate
 over the constants as follows:
@@ -403,7 +403,7 @@ for (DataBlockEncoding c : DataBlockEncoding.values())
 
 
 valueOf
-public static DataBlockEncoding valueOf(String name)
+public static DataBlockEncoding valueOf(String name)
 Returns the enum constant of this type with the specified 
name.
 The string must match exactly an identifier used to declare an
 enum constant in this type.  (Extraneous whitespace characters are 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html 
b/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
index 16cacee..4db2750 100644
--- a/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
+++ b/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
@@ -363,7 +363,7 @@ extends 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
 configureIncrementalLoad
-public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
+public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
                                             Table table,
                                             RegionLocator regionLocator)
                                      throws IOException
@@ -391,7 +391,7 @@ extends 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
 configureIncrementalLoad
-public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
+public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job,
                                             HTableDescriptor tableDescriptor,
                                             RegionLocator regionLocator)
                                      throws IOException
@@ -419,7 +419,7 @@ extends 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
 configureIncrementalLoadMap
-public static void configureIncrementalLoadMap(org.apache.hadoop.mapreduce.Job job,
+public static void configureIncrementalLoadMap(org.apache.hadoop.mapreduce.Job job,
                                                HTableDescriptor tableDescriptor)
                                         throws IOException
 


[52/52] hbase-site git commit: Empty commit

2017-03-21 Thread misty
Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/fadf6d5a
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/fadf6d5a
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/fadf6d5a

Branch: refs/heads/asf-site
Commit: fadf6d5a0b9a37f275bf7389708bd9b44a5f97bd
Parents: 22cff34
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Tue Mar 21 09:34:38 2017 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Tue Mar 21 09:34:38 2017 -0700

--

--




[32/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html 
b/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html
index b04fd0d..7a77e85 100644
--- a/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html
+++ b/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html
@@ -182,7 +182,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 testKeyProvider
-public static void testKeyProvider(org.apache.hadoop.conf.Configuration conf)
+public static void testKeyProvider(org.apache.hadoop.conf.Configuration conf)
                             throws IOException
 Check that the configured key provider can be loaded and 
initialized, or
  throw an exception.
@@ -200,7 +200,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 testCipherProvider
-public static void testCipherProvider(org.apache.hadoop.conf.Configuration conf)
+public static void testCipherProvider(org.apache.hadoop.conf.Configuration conf)
                                throws IOException
 Check that the configured cipher provider can be loaded and 
initialized, or
  throw an exception.
@@ -218,7 +218,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 testEncryption
-public static void testEncryption(org.apache.hadoop.conf.Configuration conf,
+public static void testEncryption(org.apache.hadoop.conf.Configuration conf,
                                   String cipher,
                                   byte[] key)
                            throws IOException

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/RegionMover.html 
b/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
index e6a7ab2..4bbe058 100644
--- a/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
+++ b/apidocs/org/apache/hadoop/hbase/util/RegionMover.html
@@ -176,7 +176,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 Fields inherited from class org.apache.hadoop.hbase.util.AbstractHBaseTool
-cmdLineArgs, conf, EXIT_FAILURE, EXIT_SUCCESS
+cmdLineArgs, conf, EXIT_FAILURE, EXIT_SUCCESS, LONG_HELP_OPTION, options, SHORT_HELP_OPTION
 
 
 
@@ -237,7 +237,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 Methods inherited from class org.apache.hadoop.hbase.util.AbstractHBaseTool
-addOption, addOptNoArg, addOptNoArg, addOptWithArg, addOptWithArg, addRequiredOption, addRequiredOptWithArg, addRequiredOptWithArg, doStaticMain, getConf, getOptionAsDouble, getOptionAsInt, parseInt, parseLong, printUsage, printUsage, processOldArgs, run, setConf
+addOption, addOptNoArg, addOptNoArg, addOptWithArg, addOptWithArg, addRequiredOption, addRequiredOptWithArg, addRequiredOptWithArg, doStaticMain, getConf, getOptionAsDouble, getOptionAsInt, parseArgs, parseInt, parseLong, printUsage, printUsage, processOldArgs, run, setConf
 
 
 
@@ -398,7 +398,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 addOptions
-protected void addOptions()
+protected void addOptions()
 Description copied from class: org.apache.hadoop.hbase.util.AbstractHBaseTool
 Override this to add command-line options using AbstractHBaseTool.addOptWithArg(java.lang.String, java.lang.String)
 and similar methods.
@@ -414,7 +414,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 processOptions
-protected void processOptions(org.apache.commons.cli.CommandLine cmd)
+protected void processOptions(org.apache.commons.cli.CommandLine cmd)
 Description copied from class: org.apache.hadoop.hbase.util.AbstractHBaseTool
 This method is called to process the options after they have been parsed.
 
@@ -429,7 +429,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 doWork
-protected int doWork()
+protected int doWork()
                throws Exception
 Description copied from class: org.apache.hadoop.hbase.util.AbstractHBaseTool
 The "main function" of the tool
@@ -447,7 +447,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool
 
 
 main
-public 

[06/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html
index 8803754..bff248e 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html
@@ -78,9 +78,9 @@
 070public class WALPlayer extends Configured 
implements Tool {
 071  private static final Log LOG = 
LogFactory.getLog(WALPlayer.class);
 072  final static String NAME = 
"WALPlayer";
-073  final static String BULK_OUTPUT_CONF_KEY = "wal.bulk.output";
-074  final static String TABLES_KEY = "wal.input.tables";
-075  final static String TABLE_MAP_KEY = "wal.input.tablesmap";
+073  public final static String BULK_OUTPUT_CONF_KEY = "wal.bulk.output";
+074  public final static String TABLES_KEY = "wal.input.tables";
+075  public final static String TABLE_MAP_KEY = "wal.input.tablesmap";
 076
 077  // This relies on Hadoop Configuration 
to handle warning about deprecated configs and
 078  // to set the correct non-deprecated 
configs when an old one shows up.
@@ -92,271 +92,302 @@
 084
 085  private final static String 
JOB_NAME_CONF_KEY = "mapreduce.job.name";
 086
-087  protected WALPlayer(final Configuration 
c) {
-088super(c);
-089  }
-090
-091  /**
-092   * A mapper that just writes out 
KeyValues.
-093   * This one can be used together with 
{@link KeyValueSortReducer}
-094   */
-095  static class WALKeyValueMapper
-096      extends Mapper<WALKey, WALEdit, ImmutableBytesWritable, KeyValue> {
-097    private byte[] table;
-098
-099@Override
-100public void map(WALKey key, WALEdit 
value,
-101  Context context)
-102throws IOException {
-103  try {
-104// skip all other tables
-105if (Bytes.equals(table, 
key.getTablename().getName())) {
-106  for (Cell cell : 
value.getCells()) {
-107KeyValue kv = 
KeyValueUtil.ensureKeyValue(cell);
-108if 
(WALEdit.isMetaEditFamily(kv)) continue;
-109context.write(new 
ImmutableBytesWritable(CellUtil.cloneRow(kv)), kv);
-110  }
-111}
-112  } catch (InterruptedException e) 
{
-113e.printStackTrace();
-114  }
-115}
-116
-117@Override
-118public void setup(Context context) 
throws IOException {
-119  // only a single table is supported 
when HFiles are generated with HFileOutputFormat
-120  String[] tables = 
context.getConfiguration().getStrings(TABLES_KEY);
-121  if (tables == null || tables.length 
!= 1) {
-122// this can only happen when 
WALMapper is used directly by a class other than WALPlayer
-123throw new IOException("Exactly 
one table must be specified for bulk HFile case.");
-124  }
-125  table = Bytes.toBytes(tables[0]);
-126}
-127  }
-128
-129  /**
-130   * A mapper that writes out {@link 
Mutation} to be directly applied to
-131   * a running HBase instance.
-132   */
-133  protected static class WALMapper
-134      extends Mapper<WALKey, WALEdit, ImmutableBytesWritable, Mutation> {
-135    private Map<TableName, TableName> tables = new TreeMap<TableName, TableName>();
-136
-137@Override
-138public void map(WALKey key, WALEdit 
value, Context context)
-139throws IOException {
-140  try {
-141if (tables.isEmpty() || 
tables.containsKey(key.getTablename())) {
-142  TableName targetTable = 
tables.isEmpty() ?
-143key.getTablename() :
-144
tables.get(key.getTablename());
-145  ImmutableBytesWritable tableOut 
= new ImmutableBytesWritable(targetTable.getName());
-146  Put put = null;
-147  Delete del = null;
-148  Cell lastCell = null;
-149  for (Cell cell : 
value.getCells()) {
-150// filtering WAL meta 
entries
-151if 
(WALEdit.isMetaEditFamily(cell)) continue;
-152
-153// Allow a subclass filter 
out this cell.
-154if (filter(context, cell)) 
{
-155  // A WALEdit may contain 
multiple operations (HBASE-3584) and/or
-156  // multiple rows 
(HBASE-5229).
-157  // Aggregate as much as 
possible into a single Put/Delete
-158  // operation before writing 
to the context.
-159  if (lastCell == null || 
lastCell.getTypeByte() != cell.getTypeByte()
-160  || 
!CellUtil.matchingRow(lastCell, cell)) {
-161// row or type changed, 
write out aggregate KVs.
-162if (put != null) 
context.write(tableOut, put);
-163if (del != null) 
context.write(tableOut, del);
-164if 
(CellUtil.isDelete(cell)) {
-165  del = new 
Delete(CellUtil.cloneRow(cell));
-166} else {
-167

[07/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html
index 099a926..ca75198 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html
@@ -459,7 +459,7 @@
 451job.setMapperClass(mapper);
 452Configuration conf = 
job.getConfiguration();
 453HBaseConfiguration.merge(conf, 
HBaseConfiguration.create(conf));
-454    List<String> scanStrings = new ArrayList<String>();
+454    List<String> scanStrings = new ArrayList<>();
 455
 456for (Scan scan : scans) {
 457  
scanStrings.add(convertScanToString(scan));
@@ -815,7 +815,7 @@
 807if (conf == null) {
 808  throw new 
IllegalArgumentException("Must provide a configuration object.");
 809}
-810    Set<String> paths = new HashSet<String>(conf.getStringCollection("tmpjars"));
+810    Set<String> paths = new HashSet<>(conf.getStringCollection("tmpjars"));
 811if (paths.isEmpty()) {
 812  throw new 
IllegalArgumentException("Configuration contains no tmpjars.");
 813}
@@ -887,13 +887,13 @@
 879  Class?... classes) throws 
IOException {
 880
 881FileSystem localFs = 
FileSystem.getLocal(conf);
-882    Set<String> jars = new HashSet<String>();
+882    Set<String> jars = new HashSet<>();
 883// Add jars that are already in the 
tmpjars variable
 884
jars.addAll(conf.getStringCollection("tmpjars"));
 885
 886// add jars as we find them to a map 
of contents jar name so that we can avoid
 887// creating new jars for classes that 
have already been packaged.
-888    Map<String, String> packagedClasses = new HashMap<String, String>();
+888    Map<String, String> packagedClasses = new HashMap<>();
 889
 890// Add jars containing the specified 
classes
 891    for (Class<?> clazz : classes) {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
index 3150448..9567688 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
@@ -89,7 +89,7 @@
 081   */
 082  public void restart(byte[] firstRow) 
throws IOException {
 083currentScan = new Scan(scan);
-084    currentScan.setStartRow(firstRow);
+084    currentScan.withStartRow(firstRow);
 085
currentScan.setScanMetricsEnabled(true);
 086if (this.scanner != null) {
 087  if (logScannerActivity) {
@@ -281,7 +281,7 @@
 273   * @throws IOException
 274   */
 275  private void updateCounters() throws 
IOException {
-276    ScanMetrics scanMetrics = currentScan.getScanMetrics();
+276    ScanMetrics scanMetrics = scanner.getScanMetrics();
 277if (scanMetrics == null) {
 278  return;
 279}

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
index 2a522e5..21a2475 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
@@ -191,7 +191,7 @@
 183
 184  @Override
 185  public List<InputSplit> getSplits(JobContext job) throws IOException, InterruptedException {
-186    List<InputSplit> results = new ArrayList<InputSplit>();
+186    List<InputSplit> results = new ArrayList<>();
 187for 
(TableSnapshotInputFormatImpl.InputSplit split :
 188
TableSnapshotInputFormatImpl.getSplits(job.getConfiguration())) {
 189  results.add(new 
TableSnapshotRegionSplit(split));

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html
index 6ab4f9e..0c0f789 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html
@@ -154,7 +154,7 @@
 

[40/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/Increment.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Increment.html 
b/apidocs/org/apache/hadoop/hbase/client/Increment.html
index 81f7d77..7f424f2 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Increment.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Increment.html
@@ -611,7 +611,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 toString
-public String toString()
+public String toString()
 Description copied from 
class:Operation
 Produces a string representation of this Operation. It 
defaults to a JSON
  representation, but falls back to a string representation of the
@@ -630,7 +630,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 compareTo
-public int compareTo(Row i)
+public int compareTo(Row i)
 
 Specified by:
 http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true#compareTo-T-;
 title="class or interface in java.lang">compareToin 
interfacehttp://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true;
 title="class or interface in java.lang">ComparableRow
@@ -645,7 +645,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 hashCode
-public int hashCode()
+public int hashCode()
 
 Overrides:
 http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#hashCode--;
 title="class or interface in java.lang">hashCodein 
classhttp://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
@@ -658,7 +658,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 equals
-public boolean equals(Object obj)
+public boolean equals(Object obj)
 
 Overrides:
 http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#equals-java.lang.Object-;
 title="class or interface in java.lang">equalsin 
classhttp://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object
@@ -671,7 +671,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 extraHeapSize
-protected long extraHeapSize()
+protected long extraHeapSize()
 Description copied from 
class:Mutation
 Subclasses should override this method to add the heap size 
of their own fields.
 
@@ -688,7 +688,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 setAttribute
-public Increment setAttribute(String name,
+public Increment setAttribute(String name,
                               byte[] value)
 Description copied from 
interface:Attributes
 Sets an attribute.
@@ -711,7 +711,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 setId
-public Increment setId(String id)
+public Increment setId(String id)
 Description copied from 
class:OperationWithAttributes
 This method allows you to set an identifier on an 
operation. The original
  motivation for this was to allow the identifier to be used in slow query
@@ -732,7 +732,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 setDurability
-public Increment setDurability(Durability d)
+public Increment setDurability(Durability d)
 Description copied from 
class:Mutation
 Set the durability for this mutation
 
@@ -747,7 +747,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl
 
 
 setFamilyCellMap
-public Increment setFamilyCellMap(NavigableMap<byte[], List<Cell>> map)
+public Increment setFamilyCellMap(NavigableMap<byte[], List<Cell>> map)
[31/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
index cb1bcc7..3a27161 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
@@ -124,107 +124,107 @@
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
-RawDouble.decode(PositionedByteRangesrc)
+T
+DataType.decode(PositionedByteRangesrc)
+Read an instance of T from the buffer 
src.
+
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
-OrderedInt64.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
+OrderedInt8.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
-OrderedFloat64.decode(PositionedByteRangesrc)
-
-
 http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
-RawFloat.decode(PositionedByteRangesrc)
-
-
-http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
-OrderedInt16.decode(PositionedByteRangesrc)
+OrderedFloat32.decode(PositionedByteRangesrc)
 
 
 byte[]
 OrderedBlobVar.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
-OrderedInt8.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
+RawInteger.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
-RawByte.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
+RawFloat.decode(PositionedByteRangesrc)
 
 
-T
-FixedLengthWrapper.decode(PositionedByteRangesrc)
+T
+TerminatedWrapper.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
-RawLong.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
+RawDouble.decode(PositionedByteRangesrc)
 
 
-T
-DataType.decode(PositionedByteRangesrc)
-Read an instance of T from the buffer 
src.
-
+http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
+RawShort.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Number.html?is-external=true;
 title="class or interface in java.lang">Number
-OrderedNumeric.decode(PositionedByteRangesrc)
-
-
 http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 OrderedString.decode(PositionedByteRangesrc)
 
-
-http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-OrderedInt32.decode(PositionedByteRangesrc)
-
 
 byte[]
-OrderedBlob.decode(PositionedByteRangesrc)
+RawBytes.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
-OrderedFloat32.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
+RawLong.decode(PositionedByteRangesrc)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object[]
 Struct.decode(PositionedByteRangesrc)
 
 
-byte[]
-RawBytes.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
+OrderedInt16.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-RawInteger.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
+OrderedFloat64.decode(PositionedByteRangesrc)
 
 
-T

[20/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html
index 1a6b0c2..c93b1f5 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html
@@ -32,650 +32,650 @@
 024import java.nio.ByteBuffer;
 025import java.util.ArrayList;
 026import java.util.Arrays;
-027import java.util.Comparator;
-028import java.util.List;
-029import java.util.Map;
-030import java.util.NavigableMap;
-031import java.util.TreeMap;
-032
-033import org.apache.hadoop.hbase.Cell;
-034import org.apache.hadoop.hbase.CellComparator;
-035import org.apache.hadoop.hbase.CellScannable;
-036import org.apache.hadoop.hbase.CellScanner;
-037import org.apache.hadoop.hbase.CellUtil;
-038import org.apache.hadoop.hbase.HConstants;
-039import org.apache.hadoop.hbase.KeyValue;
-040import org.apache.hadoop.hbase.KeyValueUtil;
-041import org.apache.hadoop.hbase.classification.InterfaceAudience;
-042import org.apache.hadoop.hbase.classification.InterfaceStability;
-043import org.apache.hadoop.hbase.util.Bytes;
-044
-045/**
-046 * Single row result of a {@link Get} or {@link Scan} query.<p>
-047 *
-048 * This class is <b>NOT THREAD SAFE</b>.<p>
+027import java.util.Collections;
+028import java.util.Comparator;
+029import java.util.Iterator;
+030import java.util.List;
+031import java.util.Map;
+032import java.util.NavigableMap;
+033import java.util.TreeMap;
+034
+035import org.apache.hadoop.hbase.Cell;
+036import org.apache.hadoop.hbase.CellComparator;
+037import org.apache.hadoop.hbase.CellScannable;
+038import org.apache.hadoop.hbase.CellScanner;
+039import org.apache.hadoop.hbase.CellUtil;
+040import org.apache.hadoop.hbase.HConstants;
+041import org.apache.hadoop.hbase.KeyValue;
+042import org.apache.hadoop.hbase.KeyValueUtil;
+043import org.apache.hadoop.hbase.classification.InterfaceAudience;
+044import org.apache.hadoop.hbase.classification.InterfaceStability;
+045import org.apache.hadoop.hbase.util.Bytes;
+046
+047/**
+048 * Single row result of a {@link Get} or {@link Scan} query.<p>
 049 *
-050 * Convenience methods are available that return various {@link Map}
-051 * structures and values directly.<p>
-052 *
-053 * To get a complete mapping of all cells in the Result, which can include
-054 * multiple families and multiple versions, use {@link #getMap()}.<p>
-055 *
-056 * To get a mapping of each family to its columns (qualifiers and values),
-057 * including only the latest version of each, use {@link #getNoVersionMap()}.
-058 *
-059 * To get a mapping of qualifiers to latest values for an individual family use
-060 * {@link #getFamilyMap(byte[])}.<p>
-061 *
-062 * To get the latest value for a specific family and qualifier use
-063 * {@link #getValue(byte[], byte[])}.
-064 *
-065 * A Result is backed by an array of {@link Cell} objects, each representing
-066 * an HBase cell defined by the row, family, qualifier, timestamp, and value.<p>
-067 *
-068 * The underlying {@link Cell} objects can be accessed through the method {@link #listCells()}.
-069 * This will create a List from the internal Cell []. Better is to exploit the fact that
-070 * a new Result instance is a primed {@link CellScanner}; just call {@link #advance()} and
-071 * {@link #current()} to iterate over Cells as you would any {@link CellScanner}.
-072 * Call {@link #cellScanner()} to reset should you need to iterate the same Result over again
-073 * ({@link CellScanner}s are one-shot).
-074 *
-075 * If you need to overwrite a Result with another Result instance -- as in the old 'mapred'
-076 * RecordReader next invocations -- then create an empty Result with the null constructor and
-077 * in then use {@link #copyFrom(Result)}
-078 */
-079@InterfaceAudience.Public
-080@InterfaceStability.Stable
-081public class Result implements CellScannable, CellScanner {
-082  private Cell[] cells;
-083  private Boolean exists; // if the query was just to check existence.
-084  private boolean stale = false;
-085
-086  /**
-087   * See {@link #mayHaveMoreCellsInRow()}. And please notice that, The client side implementation
-088   * should also check for row key change to determine if a Result is the last one for a row.
-089   */
-090  private boolean mayHaveMoreCellsInRow = false;
-091  // We're not using java serialization.  Transient here is just a marker to say
-092  // that this is where we cache row if we're ever asked for it.
-093  private transient byte [] row = null;
-094  // Ditto for familyMap.  It can be composed on fly from passed in kvs.
-095  private transient NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
-096  familyMap = null;
-097
-098  private static ThreadLocal<byte[]> 
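The Result source above caches a family-to-qualifier-to-timestamp-to-value NavigableMap. A stdlib-only sketch of that nested-map shape (String keys for brevity — HBase keys these maps on byte[] with Bytes.BYTES_COMPARATOR; ascending timestamps and the lastEntry() lookup are assumptions of this sketch, not claims about Result's own ordering):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class FamilyMapSketch {
    // family -> qualifier -> timestamp -> value, the shape of the cached
    // familyMap field shown above.
    static NavigableMap<String, NavigableMap<String, NavigableMap<Long, String>>> familyMap =
        new TreeMap<>();

    static String latest(String family, String qualifier) {
        NavigableMap<Long, String> versions = familyMap.get(family).get(qualifier);
        // Timestamps sort ascending in this sketch, so the newest cell is lastEntry().
        return versions.lastEntry().getValue();
    }

    public static void main(String[] args) {
        NavigableMap<Long, String> versions = new TreeMap<>();
        versions.put(1L, "old");
        versions.put(2L, "new");
        NavigableMap<String, NavigableMap<Long, String>> qualifiers = new TreeMap<>();
        qualifiers.put("q", versions);
        familyMap.put("cf", qualifiers);
        System.out.println(latest("cf", "q")); // new
    }
}
```

The triple nesting is what lets getMap()-style accessors hand back every version of every column in one structure.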

[36/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
index 1376754..cd88dcf 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html
@@ -130,42 +130,42 @@
 
 
 
-default ResultScanner
-AsyncTable.getScanner(byte[] family)
+ResultScanner
+Table.getScanner(byte[] family)
 Gets a scanner on the current table for the given family.
 
 
 
-ResultScanner
-Table.getScanner(byte[] family)
+default ResultScanner
+AsyncTable.getScanner(byte[] family)
 Gets a scanner on the current table for the given family.
 
 
 
-default ResultScanner
-AsyncTable.getScanner(byte[] family,
+ResultScanner
+Table.getScanner(byte[] family,
   byte[] qualifier)
 Gets a scanner on the current table for the given family and qualifier.
 
 
 
-ResultScanner
-Table.getScanner(byte[] family,
+default ResultScanner
+AsyncTable.getScanner(byte[] family,
   byte[] qualifier)
 Gets a scanner on the current table for the given family and qualifier.
 
 
 
 ResultScanner
-AsyncTable.getScanner(Scan scan)
-Returns a scanner on the current table as specified by the Scan object.
+Table.getScanner(Scan scan)
+Returns a scanner on the current table as specified by the Scan object.
 
 
 
 ResultScanner
-Table.getScanner(Scan scan)
-Returns a scanner on the current table as specified by the Scan object.
+AsyncTable.getScanner(Scan scan)
+Returns a scanner on the current table as specified by the Scan object.
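The class-use table above pairs Table's blocking methods with AsyncTable variants of the same signature. A stdlib-only sketch of that blocking-versus-asynchronous pairing (the method names and return value here are hypothetical, not the HBase API):

```java
import java.util.concurrent.CompletableFuture;

public class SyncAsyncPairSketch {
    // Blocking variant: the caller waits for the value.
    static String getSync() {
        return "row-1";
    }

    // Asynchronous variant: same result, delivered through a future,
    // the way AsyncTable mirrors Table's method set.
    static CompletableFuture<String> getAsync() {
        return CompletableFuture.supplyAsync(SyncAsyncPairSketch::getSync);
    }

    public static void main(String[] args) {
        System.out.println(getSync());
        System.out.println(getAsync().join()); // join() blocks, bridging back to sync
    }
}
```

Keeping both surfaces signature-compatible means call sites can switch between the two styles without reshaping their arguments.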
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html
index 7d66811..e45a76e 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html
@@ -179,19 +179,19 @@
 
 
 int
-Get.compareTo(Row other)
+Mutation.compareTo(Row d)
 
 
 int
-Mutation.compareTo(Row d)
+RowMutations.compareTo(Row i)
 
 
 int
-RowMutations.compareTo(Row i)
+Increment.compareTo(Row i)
 
 
 int
-Increment.compareTo(Row i)
+Get.compareTo(Row other)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html
index 29c4474..4b11626 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html
@@ -119,8 +119,8 @@
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
-AsyncTableBase.checkAndMutate(byte[] row,
+boolean
+Table.checkAndMutate(byte[] row,
   byte[] family,
   byte[] qualifier,
   CompareFilter.CompareOp compareOp,
@@ -130,8 +130,8 @@
 
 
 
-boolean
-Table.checkAndMutate(byte[] row,
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true;
 title="class or interface in java.lang">Boolean
+AsyncTableBase.checkAndMutate(byte[]row,
   byte[]family,
   byte[]qualifier,
   CompareFilter.CompareOpcompareOp,
@@ -141,14 +141,14 @@
 
 
 
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void
-AsyncTableBase.mutateRow(RowMutations mutation)
+void
+Table.mutateRow(RowMutations rm)
 Performs multiple mutations atomically on a single row.
 
 
 
-void
-Table.mutateRow(RowMutations rm)
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true;
 title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true;
 title="class or interface in java.lang">Void

[10/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html
index a8ad4e5..4d16641 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html
@@ -108,437 +108,437 @@
 100import 
com.google.common.collect.Multimap;
 101import 
com.google.common.collect.Multimaps;
 102import 
com.google.common.util.concurrent.ThreadFactoryBuilder;
-103
-104/**
-105 * Tool to load the output of HFileOutputFormat into an existing table.
-106 */
-107@InterfaceAudience.Public
-108@InterfaceStability.Stable
-109public class LoadIncrementalHFiles extends Configured implements Tool {
-110  private static final Log LOG = LogFactory.getLog(LoadIncrementalHFiles.class);
-111  private boolean initalized = false;
-112
-113  public static final String NAME = "completebulkload";
-114  static final String RETRY_ON_IO_EXCEPTION = "hbase.bulkload.retries.retryOnIOException";
-115  public static final String MAX_FILES_PER_REGION_PER_FAMILY
-116= "hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily";
-117  private static final String ASSIGN_SEQ_IDS = "hbase.mapreduce.bulkload.assign.sequenceNumbers";
-118  public final static String CREATE_TABLE_CONF_KEY = "create.table";
-119  public final static String SILENCE_CONF_KEY = "ignore.unmatched.families";
-120  public final static String ALWAYS_COPY_FILES = "always.copy.files";
-121
-122  // We use a '.' prefix which is ignored when walking directory trees
-123  // above. It is invalid family name.
-124  final static String TMP_DIR = ".tmp";
-125
-126  private int maxFilesPerRegionPerFamily;
-127  private boolean assignSeqIds;
-128  private Set<String> unmatchedFamilies = new HashSet<String>();
-129
-130  // Source filesystem
-131  private FileSystem fs;
-132  // Source delegation token
-133  private FsDelegationToken fsDelegationToken;
-134  private String bulkToken;
-135  private UserProvider userProvider;
-136  private int nrThreads;
-137  private RpcControllerFactory rpcControllerFactory;
-138  private AtomicInteger numRetries;
-139
-140  private Map<LoadQueueItem, ByteBuffer> retValue = null;
-141
-142  public LoadIncrementalHFiles(Configuration conf) throws Exception {
-143super(conf);
-144this.rpcControllerFactory = new RpcControllerFactory(conf);
-145initialize();
-146  }
-147
-148  private void initialize() throws Exception {
-149if (initalized) {
-150  return;
-151}
-152// make a copy, just to be sure we're not overriding someone else's config
-153setConf(HBaseConfiguration.create(getConf()));
-154Configuration conf = getConf();
-155// disable blockcache for tool invocation, see HBASE-10500
-156conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0);
-157this.userProvider = UserProvider.instantiate(conf);
-158this.fsDelegationToken = new FsDelegationToken(userProvider, "renewer");
-159assignSeqIds = conf.getBoolean(ASSIGN_SEQ_IDS, true);
-160maxFilesPerRegionPerFamily = conf.getInt(MAX_FILES_PER_REGION_PER_FAMILY, 32);
-161nrThreads = conf.getInt("hbase.loadincremental.threads.max",
-162  Runtime.getRuntime().availableProcessors());
-163initalized = true;
-164numRetries = new AtomicInteger(1);
-165  }
-166
-167  private void usage() {
-168System.err.println("usage: " + NAME + " /path/to/hfileoutputformat-output tablename" + "\n -D"
-169+ CREATE_TABLE_CONF_KEY + "=no - can be used to avoid creation of table by this tool\n"
-170+ "  Note: if you set this to 'no', then the target table must already exist in HBase\n -D"
-171+ SILENCE_CONF_KEY + "=yes - can be used to ignore unmatched column families\n"
-172+ "\n");
-173  }
-174
-175  private interface BulkHFileVisitor<TFamily> {
-176TFamily bulkFamily(final byte[] familyName)
-177  throws IOException;
-178void bulkHFile(final TFamily family, final FileStatus hfileStatus)
-179  throws IOException;
-180  }
-181
-182  /**
-183   * Iterate over the bulkDir hfiles.
-184   * Skip reference, HFileLink, files starting with "_" and non-valid hfiles.
-185   */
-186  private static <TFamily> void visitBulkHFiles(final FileSystem fs, final Path bulkDir,
-187final BulkHFileVisitor<TFamily> visitor) throws IOException {
-188visitBulkHFiles(fs, bulkDir, visitor, true);
-189  }
-190
-191  /**
-192   * Iterate over the bulkDir hfiles.
-193   * Skip reference, HFileLink, files starting with "_".
-194   * Check and skip non-valid hfiles by default, or skip this validation by setting
-195   * 'hbase.loadincremental.validate.hfile' to false.
-196   */
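The visitBulkHFiles Javadoc in the hunk above says the directory walk skips files starting with "_" and ignores the ".tmp" working directory. A stdlib-only sketch of that filtering rule (the helper below is hypothetical, not the tool's actual code, and it only models the name-based skips, not the HFile validity check):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BulkDirFilterSketch {
    // Mirror the skip rules described above: ignore names starting with "_"
    // and the special ".tmp" working directory.
    static boolean isCandidateHFile(String name) {
        return !name.startsWith("_") && !name.equals(".tmp");
    }

    static List<String> candidates(List<String> names) {
        return names.stream()
                    .filter(BulkDirFilterSketch::isCandidateHFile)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("_SUCCESS", ".tmp", "cf1", "cf2");
        System.out.println(candidates(names)); // [cf1, cf2]
    }
}
```

Skipping underscore-prefixed names is what lets MapReduce markers like _SUCCESS sit in the output directory without being mistaken for column-family data.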

[21/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html
index 057fcd3..782620c 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html
@@ -52,7 +52,7 @@
 044}
 045
 046if (attributes == null) {
-047  attributes = new HashMap<String, byte[]>();
+047  attributes = new HashMap<>();
 048}
 049
 050if (value == null) {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html
index 45b8fb8..64126fa 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html
@@ -169,9 +169,9 @@
 161   */
 162  public Put(Put putToCopy) {
 163this(putToCopy.getRow(), putToCopy.ts);
-164this.familyMap = new TreeMap<byte [], List<Cell>>(Bytes.BYTES_COMPARATOR);
+164this.familyMap = new TreeMap<>(Bytes.BYTES_COMPARATOR);
 165for(Map.Entry<byte [], List<Cell>> entry: putToCopy.getFamilyCellMap().entrySet()) {
-166  this.familyMap.put(entry.getKey(), new ArrayList<Cell>(entry.getValue()));
+166  this.familyMap.put(entry.getKey(), new ArrayList<>(entry.getValue()));
 167}
 168this.durability = putToCopy.durability;
 169for (Map.Entry<String, byte[]> entry : putToCopy.getAttributesMap().entrySet()) {
@@ -472,7 +472,7 @@
 464   * returns an empty list if one doesn't exist for the given family.
 465   */
 466  public List<Cell> get(byte[] family, byte[] qualifier) {
-467List<Cell> filteredList = new ArrayList<Cell>();
+467List<Cell> filteredList = new ArrayList<>();
 468for (Cell cell: getCellList(family)) {
 469  if (CellUtil.matchingQualifier(cell, qualifier)) {
 470filteredList.add(cell);
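The Put hunks above shorten explicit generic arguments to the Java 7 diamond operator, and the copy constructor keys its family map on a byte[] comparator. A stdlib-only sketch of both points (the comparator below is a stand-in for HBase's Bytes.BYTES_COMPARATOR, an assumption of this sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.TreeMap;

public class DiamondSketch {
    // Unsigned lexicographic byte[] comparator: byte[] has no value-based
    // compareTo, so a TreeMap keyed on it needs an explicit comparator.
    static final Comparator<byte[]> BYTES = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    };

    public static void main(String[] args) {
        // Diamond operator: <> infers <byte[], List<String>> from the left side,
        // exactly the shortening applied in the hunks above.
        TreeMap<byte[], List<String>> familyMap = new TreeMap<>(BYTES);
        familyMap.put(new byte[] { 2 }, new ArrayList<>(Arrays.asList("b")));
        familyMap.put(new byte[] { 1 }, new ArrayList<>(Arrays.asList("a")));
        System.out.println(familyMap.firstEntry().getValue()); // [a]
    }
}
```

Copying each value list into a fresh ArrayList, as the Put copy constructor does, keeps the copy independent of the source Put's cell lists.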

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
index 8c761aa..127619d 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
@@ -64,7 +64,7 @@
 056
 057  /**
 058   * Apply the specified server-side filter when performing the Query.
-059   * Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
+059   * Only {@link Filter#filterKeyValue(org.apache.hadoop.hbase.Cell)} is called AFTER all tests
 060   * for ttl, column match, deletes and max versions have been run.
 061   * @param filter filter to run on the server
 062   * @return this for invocation chaining



[01/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site c6ddb98fc -> fadf6d5a0


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html 
b/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
index 27b58df..c28cd5c 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html
@@ -405,7 +405,7 @@
 397LOG.info("Moving " + regionsToMove.size() + " regions to " + server + " using "
 398+ this.maxthreads + " threads.Ack mode:" + this.ack);
 399ExecutorService moveRegionsPool = Executors.newFixedThreadPool(this.maxthreads);
-400List<Future<Boolean>> taskList = new ArrayList<Future<Boolean>>();
+400List<Future<Boolean>> taskList = new ArrayList<>();
 401int counter = 0;
 402while (counter < regionsToMove.size()) {
 403  HRegionInfo region = regionsToMove.get(counter);
@@ -469,7 +469,7 @@
 461  justification="FB is wrong; its size is read")
 462  private void unloadRegions(Admin admin, String server, ArrayList<String> regionServers,
 463  boolean ack, List<HRegionInfo> movedRegions) throws Exception {
-464List<HRegionInfo> regionsToMove = new ArrayList<HRegionInfo>();// FindBugs: DLS_DEAD_LOCAL_STORE
+464List<HRegionInfo> regionsToMove = new ArrayList<>();// FindBugs: DLS_DEAD_LOCAL_STORE
 465regionsToMove = getRegions(this.conf, server);
 466if (regionsToMove.isEmpty()) {
 467  LOG.info("No Regions to moveQuitting now");
@@ -489,7 +489,7 @@
 481  + regionServers.size() + " servers using " + this.maxthreads + " threads .Ack Mode:"
 482  + ack);
 483  ExecutorService moveRegionsPool = Executors.newFixedThreadPool(this.maxthreads);
-484  List<Future<Boolean>> taskList = new ArrayList<Future<Boolean>>();
+484  List<Future<Boolean>> taskList = new ArrayList<>();
 485  int serverIndex = 0;
 486  while (counter < regionsToMove.size()) {
 487if (ack) {
@@ -644,7 +644,7 @@
 636  }
 637
 638  private List<HRegionInfo> readRegionsFromFile(String filename) throws IOException {
-639List<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+639List<HRegionInfo> regions = new ArrayList<>();
 640File f = new File(filename);
 641if (!f.exists()) {
 642  return regions;
@@ -766,7 +766,7 @@
 758   * @return List of servers from the exclude file in format 'hostname:port'.
 759   */
 760  private ArrayList<String> readExcludes(String excludeFile) throws IOException {
-761ArrayList<String> excludeServers = new ArrayList<String>();
+761ArrayList<String> excludeServers = new ArrayList<>();
 762if (excludeFile == null) {
 763  return excludeServers;
 764} else {
@@ -829,184 +829,183 @@
 821   * @throws IOException
 822   */
 823  private ArrayList<String> getServers(Admin admin) throws IOException {
-824ArrayList<ServerName> serverInfo =
-825new ArrayList<ServerName>(admin.getClusterStatus().getServers());
-826ArrayList<String> regionServers = new ArrayList<String>(serverInfo.size());
-827for (ServerName server : serverInfo) {
-828  regionServers.add(server.getServerName());
-829}
-830return regionServers;
-831  }
-832
-833  private void deleteFile(String filename) {
-834File f = new File(filename);
-835if (f.exists()) {
-836  f.delete();
-837}
-838  }
-839
-840  /**
-841   * Tries to scan a row from passed region
-842   * @param admin
-843   * @param region
-844   * @throws IOException
-845   */
-846  private void isSuccessfulScan(Admin admin, HRegionInfo region) throws IOException {
-847Scan scan = new Scan(region.getStartKey());
-848scan.setBatch(1);
-849scan.setCaching(1);
-850scan.setFilter(new FirstKeyOnlyFilter());
-851try {
-852  Table table = admin.getConnection().getTable(region.getTable());
-853  try {
-854ResultScanner scanner = table.getScanner(scan);
-855try {
-856  scanner.next();
-857} finally {
-858  scanner.close();
-859}
-860  } finally {
-861table.close();
-862  }
-863} catch (IOException e) {
-864  LOG.error("Could not scan region:" + region.getEncodedName(), e);
-865  throw e;
-866}
-867  }
-868
-869  /**
-870   * Returns true if passed region is still on serverName when we look at hbase:meta.
-871   * @param admin
-872   * @param region
-873   * @param serverName
-874   * @return true if region is hosted on serverName otherwise false
-875   * @throws IOException
-876   */
-877  private boolean isSameServer(Admin admin, HRegionInfo region, String serverName)
-878  throws IOException {
-879String serverForRegion = getServerNameForRegion(admin, region);
-880if (serverForRegion != null &&
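RegionMover's move loop above submits one task per region to a fixed-size pool and collects the List<Future<Boolean>> handles before waiting on them. A stdlib-only sketch of that pattern (moveRegion below is a hypothetical stand-in that always succeeds, not the real tool's logic):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MovePoolSketch {
    // Stand-in for a single region move.
    static boolean moveRegion(int region) {
        return true;
    }

    static int moveAll(int regions, int maxthreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(maxthreads);
        List<Future<Boolean>> taskList = new ArrayList<>();
        for (int r = 0; r < regions; r++) {
            final int region = r;
            taskList.add(pool.submit(() -> moveRegion(region)));
        }
        int ok = 0;
        for (Future<Boolean> f : taskList) {
            if (f.get()) ok++; // get() blocks until the task finishes
        }
        pool.shutdown();
        return ok;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(moveAll(8, 4)); // 8
    }
}
```

Collecting futures first and waiting afterwards is what lets up to maxthreads moves run concurrently while still giving the caller a per-region success flag.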
 

[50/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/acid-semantics.html
--
diff --git a/acid-semantics.html b/acid-semantics.html
index d0b08af..a0c5972 100644
--- a/acid-semantics.html
+++ b/acid-semantics.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase   
   Apache HBase (TM) ACID Properties
@@ -262,9 +262,9 @@
 
   
 
-
-
-
+https://easychair.org/cfp/hbasecon2017; id="bannerLeft">
+   
 
+
   
 

 
@@ -618,7 +618,7 @@ under the License. -->
 https://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2017-02-17
+  Last Published: 
2017-03-21
 
 
 



[25/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html
index 16beebf..24b2bd0 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html
@@ -28,23 +28,23 @@
 020
 021import java.io.IOException;
 022import java.lang.reflect.Constructor;
-023import java.util.concurrent.ExecutorService;
-024
-025import org.apache.hadoop.conf.Configuration;
-026import org.apache.hadoop.hbase.HBaseConfiguration;
-027import org.apache.hadoop.hbase.classification.InterfaceAudience;
-028import org.apache.hadoop.hbase.classification.InterfaceStability;
-029import org.apache.hadoop.hbase.security.User;
-030import org.apache.hadoop.hbase.security.UserProvider;
-031import org.apache.hadoop.hbase.util.ReflectionUtils;
-032
+023import java.util.concurrent.CompletableFuture;
+024import java.util.concurrent.ExecutorService;
+025
+026import org.apache.hadoop.conf.Configuration;
+027import org.apache.hadoop.hbase.HBaseConfiguration;
+028import org.apache.hadoop.hbase.classification.InterfaceAudience;
+029import org.apache.hadoop.hbase.classification.InterfaceStability;
+030import org.apache.hadoop.hbase.security.User;
+031import org.apache.hadoop.hbase.security.UserProvider;
+032import org.apache.hadoop.hbase.util.ReflectionUtils;
 033
 034/**
-035 * A non-instantiable class that manages creation of {@link Connection}s.
-036 * Managing the lifecycle of the {@link Connection}s to the cluster is the responsibility of
-037 * the caller.
-038 * From a {@link Connection}, {@link Table} implementations are retrieved
-039 * with {@link Connection#getTable(TableName)}. Example:
+035 * A non-instantiable class that manages creation of {@link Connection}s. Managing the lifecycle of
+036 * the {@link Connection}s to the cluster is the responsibility of the caller. From a
+037 * {@link Connection}, {@link Table} implementations are retrieved with
+038 * {@link Connection#getTable(org.apache.hadoop.hbase.TableName)}. Example:
+039 *
 040 * <pre>
 041 * Connection connection = ConnectionFactory.createConnection(config);
 042 * Table table = connection.getTable(TableName.valueOf("table1"));
@@ -58,243 +58,250 @@
 050 *
 051 * Similarly, {@link Connection} also returns {@link Admin} and {@link RegionLocator}
 052 * implementations.
-053 *
-054 * @see Connection
-055 * @since 0.99.0
-056 */
-057@InterfaceAudience.Public
-058@InterfaceStability.Evolving
-059public class ConnectionFactory {
-060
-061  public static final String HBASE_CLIENT_ASYNC_CONNECTION_IMPL =
-062  "hbase.client.async.connection.impl";
-063
-064  /** No public c.tors */
-065  protected ConnectionFactory() {
-066  }
-067
-068  /**
-069   * Create a new Connection instance using default HBaseConfiguration. Connection
-070   * encapsulates all housekeeping for a connection to the cluster. All tables and interfaces
-071   * created from returned connection share zookeeper connection, meta cache, and connections
-072   * to region servers and masters.
-073   * <br>
-074   * The caller is responsible for calling {@link Connection#close()} on the returned
-075   * connection instance.
-076   *
-077   * Typical usage:
-078   * <pre>
-079   * Connection connection = ConnectionFactory.createConnection();
-080   * Table table = connection.getTable(TableName.valueOf("mytable"));
-081   * try {
-082   *   table.get(...);
-083   *   ...
-084   * } finally {
-085   *   table.close();
-086   *   connection.close();
-087   * }
-088   * </pre>
-089   *
-090   * @return Connection object for <code>conf</code>
-091   */
-092  public static Connection createConnection() throws IOException {
-093return createConnection(HBaseConfiguration.create(), null, null);
-094  }
-095
-096  /**
-097   * Create a new Connection instance using the passed <code>conf</code> instance. Connection
-098   * encapsulates all housekeeping for a connection to the cluster. All tables and interfaces
-099   * created from returned connection share zookeeper connection, meta cache, and connections
-100   * to region servers and masters.
-101   * <br>
-102   * The caller is responsible for calling {@link Connection#close()} on the returned
-103   * connection instance.
-104   *
-105   * Typical usage:
-106   * <pre>
-107   * Connection connection = ConnectionFactory.createConnection(conf);
-108   * Table table = connection.getTable(TableName.valueOf("mytable"));
-109   * try {
-110   *   table.get(...);
-111   *   ...
-112   * } finally {
-113   *   table.close();
-114   *   connection.close();
-115   * }
-116   * </pre>
-117   *
-118   * @param conf configuration
-119   * @return Connection object 
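The ConnectionFactory Javadoc above puts lifecycle responsibility on the caller: close the Table, then the Connection, in a finally block. A stdlib-only sketch of that ownership pattern using try-with-resources (the Connection/Table classes below are hypothetical stand-ins implementing AutoCloseable, not the HBase types):

```java
public class LifecycleSketch {
    // Minimal stand-ins for the caller-managed resources described above.
    static class Connection implements AutoCloseable {
        boolean closed;
        Table getTable(String name) { return new Table(name); }
        @Override public void close() { closed = true; }
    }

    static class Table implements AutoCloseable {
        final String name;
        boolean closed;
        Table(String name) { this.name = name; }
        String get(String row) { return name + "/" + row; }
        @Override public void close() { closed = true; }
    }

    // try-with-resources closes the Table automatically; the Connection stays
    // open for reuse and the caller still owns (and must eventually close) it.
    static String fetch(Connection connection, String row) {
        try (Table table = connection.getTable("mytable")) {
            return table.get(row);
        }
    }

    public static void main(String[] args) {
        try (Connection connection = new Connection()) {
            System.out.println(fetch(connection, "r1")); // mytable/r1
        }
    }
}
```

This expresses the same close-table-then-connection discipline as the try/finally blocks in the Javadoc examples, with the compiler generating the finally logic.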

[48/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apache_hbase_reference_guide.pdfmarks
--
diff --git a/apache_hbase_reference_guide.pdfmarks 
b/apache_hbase_reference_guide.pdfmarks
index f8eba53..9d8495c 100644
--- a/apache_hbase_reference_guide.pdfmarks
+++ b/apache_hbase_reference_guide.pdfmarks
@@ -2,8 +2,8 @@
   /Author (Apache HBase Team)
   /Subject ()
   /Keywords ()
-  /ModDate (D:20170217144957)
-  /CreationDate (D:20170217144957)
+  /ModDate (D:20170321142451)
+  /CreationDate (D:20170321142451)
   /Creator (Asciidoctor PDF 1.5.0.alpha.6, based on Prawn 1.2.1)
   /Producer ()
   /DOCINFO pdfmark

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/allclasses-frame.html
--
diff --git a/apidocs/allclasses-frame.html b/apidocs/allclasses-frame.html
index efd05d0..72c3a4e 100644
--- a/apidocs/allclasses-frame.html
+++ b/apidocs/allclasses-frame.html
@@ -249,6 +249,8 @@
 RawInteger
 RawLong
 RawScanResultConsumer
+RawScanResultConsumer.ScanController
+RawScanResultConsumer.ScanResumer
 RawShort
 RawString
 RawStringFixedLength
@@ -309,6 +311,7 @@
 ServerName
 ServerNotRunningYetException
 ServerTooBusyException
+ShortCircuitMasterConnection
 SimpleByteRange
 SimpleMutableByteRange
 SimplePositionedByteRange

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/allclasses-noframe.html
--
diff --git a/apidocs/allclasses-noframe.html b/apidocs/allclasses-noframe.html
index c64c1a8..6593100 100644
--- a/apidocs/allclasses-noframe.html
+++ b/apidocs/allclasses-noframe.html
@@ -249,6 +249,8 @@
 RawInteger
 RawLong
 RawScanResultConsumer
+RawScanResultConsumer.ScanController
+RawScanResultConsumer.ScanResumer
 RawShort
 RawString
 RawStringFixedLength
@@ -309,6 +311,7 @@
 ServerName
 ServerNotRunningYetException
 ServerTooBusyException
+ShortCircuitMasterConnection
 SimpleByteRange
 SimpleMutableByteRange
 SimplePositionedByteRange

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/constant-values.html
--
diff --git a/apidocs/constant-values.html b/apidocs/constant-values.html
index 178d0a2..040fc68 100644
--- a/apidocs/constant-values.html
+++ b/apidocs/constant-values.html
@@ -119,6 +119,13 @@
 "Replication"
 
 
+
+
+publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+SPARK
+"Spark"
+
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
@@ -3794,26 +3801,26 @@
 "create.table"
 
 
+
+
+publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+IGNORE_UNMATCHED_CF_CONF_KEY
+"ignore.unmatched.families"
+
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 MAX_FILES_PER_REGION_PER_FAMILY
 "hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily"
 
-
+
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 NAME
 "completebulkload"
 
-
-
-
-publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
-SILENCE_CONF_KEY
-"ignore.unmatched.families"
-
 
 
 
@@ -4156,6 +4163,39 @@
 
 
 
+
+
+org.apache.hadoop.hbase.mapreduce.WALPlayer
+
+Modifier and Type
+Constant Field
+Value
+
+
+
+
+
+publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+BULK_OUTPUT_CONF_KEY
+"wal.bulk.output"
+
+
+
+
+publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+TABLE_MAP_KEY
+"wal.input.tablesmap"
+
+
+
+
+publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+TABLES_KEY
+"wal.input.tables"
+
+
+
+
 
 
 
@@ -4714,10 +4754,10 @@
 "default"
 
 
-
+
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
-NAMESPACEDESC_PROP_GROUP
+NAMESPACE_DESC_PROP_GROUP
 "hbase.rsgroup.name"
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/deprecated-list.html
--
diff --git a/apidocs/deprecated-list.html b/apidocs/deprecated-list.html
index eabb570..987f686 100644
--- 

[27/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html 
b/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html
index 45cd59a..66b1cdb 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html
@@ -48,9 +48,9 @@
 040import org.apache.hadoop.hbase.client.Durability;
 041import org.apache.hadoop.hbase.client.RegionReplicaUtil;
 042import org.apache.hadoop.hbase.exceptions.DeserializationException;
-043import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-044import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
-045import org.apache.hadoop.hbase.security.User;
+043import org.apache.hadoop.hbase.security.User;
+044import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+045import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema;
 046import org.apache.hadoop.hbase.util.Bytes;
 047
 048/**
@@ -72,1519 +72,1516 @@
 064   * includes values like IS_ROOT, 
IS_META, DEFERRED_LOG_FLUSH, SPLIT_POLICY,
 065   * MAX_FILE_SIZE, READONLY, 
MEMSTORE_FLUSHSIZE etc...
 066   */
-067  private final Map&lt;Bytes, Bytes&gt; values =
-068      new HashMap&lt;Bytes, Bytes&gt;();
-069
-070  /**
-071   * A map which holds the configuration specific to the table.
-072   * The keys of the map have the same names as config keys and override the defaults with
-073   * table-specific settings. Example usage may be for compactions, etc.
-074   */
-075  private final Map&lt;String, String&gt; configuration = new HashMap&lt;String, String&gt;();
-076
-077  public static final String SPLIT_POLICY = "SPLIT_POLICY";
-078
-079  /**
-080   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by HBase Shell interface to access this metadata
-081   * attribute which denotes the maximum size of the store file after which
-082   * a region split occurs
-083   *
-084   * @see #getMaxFileSize()
-085   */
-086  public static final String MAX_FILESIZE = "MAX_FILESIZE";
-087  private static final Bytes MAX_FILESIZE_KEY =
-088      new Bytes(Bytes.toBytes(MAX_FILESIZE));
-089
-090  public static final String OWNER = "OWNER";
-091  public static final Bytes OWNER_KEY =
-092      new Bytes(Bytes.toBytes(OWNER));
-093
-094  /**
-095   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by rest interface to access this metadata
-096   * attribute which denotes if the table is Read Only
-097   *
-098   * @see #isReadOnly()
-099   */
-100  public static final String READONLY = "READONLY";
-101  private static final Bytes READONLY_KEY =
-102      new Bytes(Bytes.toBytes(READONLY));
-103
-104  /**
-105   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by HBase Shell interface to access this metadata
-106   * attribute which denotes if the table is compaction enabled
-107   *
-108   * @see #isCompactionEnabled()
-109   */
-110  public static final String COMPACTION_ENABLED = "COMPACTION_ENABLED";
-111  private static final Bytes COMPACTION_ENABLED_KEY =
-112      new Bytes(Bytes.toBytes(COMPACTION_ENABLED));
-113
-114  /**
-115   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by HBase Shell interface to access this metadata
-116   * attribute which represents the maximum size of the memstore after which
-117   * its contents are flushed onto the disk
-118   *
-119   * @see #getMemStoreFlushSize()
-120   */
-121  public static final String MEMSTORE_FLUSHSIZE = "MEMSTORE_FLUSHSIZE";
-122  private static final Bytes MEMSTORE_FLUSHSIZE_KEY =
-123      new Bytes(Bytes.toBytes(MEMSTORE_FLUSHSIZE));
-124
-125  public static final String FLUSH_POLICY = "FLUSH_POLICY";
-126
-127  /**
-128   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by rest interface to access this metadata
-129   * attribute which denotes if the table is a -ROOT- region or not
-130   *
-131   * @see #isRootRegion()
-132   */
-133  public static final String IS_ROOT = "IS_ROOT";
-134  private static final Bytes IS_ROOT_KEY =
-135      new Bytes(Bytes.toBytes(IS_ROOT));
-136
-137  /**
-138   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by rest interface to access this metadata
-139   * attribute which denotes if it is a catalog table, either
-140   * &lt;code&gt; hbase:meta &lt;/code&gt; or &lt;code&gt; -ROOT- &lt;/code&gt;
-141   *
-142   * @see #isMetaRegion()
-143   */
-144  public static final String IS_META = "IS_META";
-145  private static final Bytes IS_META_KEY =
-146      new Bytes(Bytes.toBytes(IS_META));
-147
-148  /**
-149   * &lt;em&gt;INTERNAL&lt;/em&gt; Used by HBase Shell interface to access this metadata
-150   * attribute which denotes if the deferred log flush option is enabled.
-151   * @deprecated Use {@link #DURABILITY} instead.
-152   */
-153  @Deprecated
-154  public static final String DEFERRED_LOG_FLUSH = "DEFERRED_LOG_FLUSH";
-155  @Deprecated
-156  private static final Bytes DEFERRED_LOG_FLUSH_KEY =
-157      new Bytes(Bytes.toBytes(DEFERRED_LOG_FLUSH));
-158
-160   * 

[11/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html
index 73df1db..14e7609 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html
@@ -479,7 +479,7 @@
 471  }
 472
 473  private static ArrayList&lt;byte[]&gt; toQuotedByteArrays(String... stringArgs) {
-474    ArrayList&lt;byte[]&gt; quotedArgs = new ArrayList&lt;byte[]&gt;();
+474    ArrayList&lt;byte[]&gt; quotedArgs = new ArrayList&lt;&gt;();
 475    for (String stringArg : stringArgs) {
 476      // all the filters' instantiation methods expected quoted args since they are coming from
 477      // the shell, so add them here, though it shouldn't really be needed :-/
@@ -544,7 +544,7 @@
 536      String[] allMappings = allMappingsPropVal.split(",");
 537      for (String mapping: allMappings) {
 538        if(cfRenameMap == null) {
-539          cfRenameMap = new TreeMap&lt;byte[],byte[]&gt;(Bytes.BYTES_COMPARATOR);
+539          cfRenameMap = new TreeMap&lt;&gt;(Bytes.BYTES_COMPARATOR);
 540        }
 541        String [] srcAndDest = mapping.split(":");
 542if(srcAndDest.length != 2) {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html
index 22656ce..69b099e 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html
@@ -257,7 +257,7 @@
 249    public ParsedLine parse(byte[] lineBytes, int length)
 250        throws BadTsvLineException {
 251      // Enumerate separator offsets
-252      ArrayList&lt;Integer&gt; tabOffsets = new ArrayList&lt;Integer&gt;(maxColumnCount);
+252      ArrayList&lt;Integer&gt; tabOffsets = new ArrayList&lt;&gt;(maxColumnCount);
 253      for (int i = 0; i &lt; length; i++) {
 254        if (lineBytes[i] == separatorByte) {
 255          tabOffsets.add(i);
@@ -456,7 +456,7 @@
 448              + " are less than row key position.");
 449        }
 450      }
-451      return new Pair&lt;Integer, Integer&gt;(startPos, endPos - startPos + 1);
+451      return new Pair&lt;&gt;(startPos, endPos - startPos + 1);
 452    }
 453  }
 454
@@ -529,7 +529,7 @@
 521    boolean noStrict = conf.getBoolean(NO_STRICT_COL_FAMILY, false);
 522    // if no.strict is false then check column family
 523    if(!noStrict) {
-524      ArrayList&lt;String&gt; unmatchedFamilies = new ArrayList&lt;String&gt;();
+524      ArrayList&lt;String&gt; unmatchedFamilies = new ArrayList&lt;&gt;();
 525      Set&lt;String&gt; cfSet = getColumnFamilies(columns);
 526      HTableDescriptor tDesc = table.getTableDescriptor();
 527      for (String cf : cfSet) {
@@ -538,7 +538,7 @@
 530        }
 531      }
 532      if(unmatchedFamilies.size() &gt; 0) {
-533        ArrayList&lt;String&gt; familyNames = new ArrayList&lt;String&gt;();
+533        ArrayList&lt;String&gt; familyNames = new ArrayList&lt;&gt;();
 534        for (HColumnDescriptor family : table.getTableDescriptor().getFamilies()) {
 535          familyNames.add(family.getNameAsString());
 536        }
@@ -634,7 +634,7 @@
 626  }
 627
 628  private static Set&lt;String&gt; getColumnFamilies(String[] columns) {
-629    Set&lt;String&gt; cfSet = new HashSet&lt;String&gt;();
+629    Set&lt;String&gt; cfSet = new HashSet&lt;&gt;();
 630    for (String aColumn : columns) {
 631      if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)
 632          || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html
index 807f600..560a84d 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html
@@ -48,7 +48,7 @@
 040  protected void reduce(ImmutableBytesWritable row, java.lang.Iterable&lt;KeyValue&gt; kvs,
 041      org.apache.hadoop.mapreduce.Reducer&lt;ImmutableBytesWritable, KeyValue, ImmutableBytesWritable, KeyValue&gt;.Context context)
 042      throws java.io.IOException, InterruptedException {
-043    TreeSet&lt;KeyValue&gt; map = new TreeSet&lt;KeyValue&gt;(CellComparator.COMPARATOR);
+043    TreeSet&lt;KeyValue&gt; map = new TreeSet&lt;&gt;(CellComparator.COMPARATOR);
 044  
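
[Editor's note: the Import, ImportTsv, and KeyValueSortReducer hunks in this commit all apply the same Java 7 cleanup — replacing a repeated type argument on the right-hand side of a generic instantiation with the diamond operator. A minimal, self-contained sketch of the pattern; the class and method names below are illustrative, not HBase code:]

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class DiamondDemo {
    // Pre-Java-7 style: the type argument is written on both sides.
    static List<String> oldStyle() {
        return new ArrayList<String>();
    }

    // Diamond operator: the compiler infers <String> from the target type.
    static List<String> newStyle() {
        return new ArrayList<>();
    }

    // Inference also works when the constructor takes an argument,
    // as in the TreeSet(CellComparator.COMPARATOR) hunk above.
    static TreeSet<String> sortedDesc() {
        TreeSet<String> s = new TreeSet<>(Comparator.reverseOrder());
        s.add("a");
        s.add("b");
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sortedDesc().first()); // largest element under reverse order
    }
}
```

Both forms compile to identical bytecode; the diamond only removes the redundant source-level repetition.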

[15/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html
index e5afc32..32c4a50 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html
@@ -124,309 +124,309 @@
 116      } else {
 117        if (range.contains(rowArr, offset, length)) {
 118          currentReturnCode = ReturnCode.INCLUDE;
-119        } else currentReturnCode = ReturnCode.SEEK_NEXT_USING_HINT;
-120      }
-121    } else {
-122      currentReturnCode = ReturnCode.INCLUDE;
-123    }
-124    return false;
-125  }
-126
-127  @Override
-128  public ReturnCode filterKeyValue(Cell ignored) {
-129    return currentReturnCode;
-130  }
-131
-132  @Override
-133  public Cell getNextCellHint(Cell currentKV) {
-134    // skip to the next range's start row
-135    return CellUtil.createFirstOnRow(range.startRow, 0,
-136        (short) range.startRow.length);
-137  }
-138
-139  /**
-140   * @return The filter serialized using pb
-141   */
-142  public byte[] toByteArray() {
-143    FilterProtos.MultiRowRangeFilter.Builder builder = FilterProtos.MultiRowRangeFilter
-144        .newBuilder();
-145    for (RowRange range : rangeList) {
-146      if (range != null) {
-147        FilterProtos.RowRange.Builder rangebuilder = FilterProtos.RowRange.newBuilder();
-148        if (range.startRow != null)
-149          rangebuilder.setStartRow(UnsafeByteOperations.unsafeWrap(range.startRow));
-150        rangebuilder.setStartRowInclusive(range.startRowInclusive);
-151        if (range.stopRow != null)
-152          rangebuilder.setStopRow(UnsafeByteOperations.unsafeWrap(range.stopRow));
-153        rangebuilder.setStopRowInclusive(range.stopRowInclusive);
-154        range.isScan = Bytes.equals(range.startRow, range.stopRow) ? 1 : 0;
-155        builder.addRowRangeList(rangebuilder.build());
-156      }
-157    }
-158    return builder.build().toByteArray();
-159  }
-160
-161  /**
-162   * @param pbBytes A pb serialized instance
-163   * @return An instance of MultiRowRangeFilter
-164   * @throws org.apache.hadoop.hbase.exceptions.DeserializationException
-165   */
-166  public static MultiRowRangeFilter parseFrom(final byte[] pbBytes)
-167      throws DeserializationException {
-168    FilterProtos.MultiRowRangeFilter proto;
-169    try {
-170      proto = FilterProtos.MultiRowRangeFilter.parseFrom(pbBytes);
-171    } catch (InvalidProtocolBufferException e) {
-172      throw new DeserializationException(e);
-173    }
-174    int length = proto.getRowRangeListCount();
-175    List&lt;FilterProtos.RowRange&gt; rangeProtos = proto.getRowRangeListList();
-176    List&lt;RowRange&gt; rangeList = new ArrayList&lt;RowRange&gt;(length);
-177    for (FilterProtos.RowRange rangeProto : rangeProtos) {
-178      RowRange range = new RowRange(rangeProto.hasStartRow() ? rangeProto.getStartRow()
-179          .toByteArray() : null, rangeProto.getStartRowInclusive(), rangeProto.hasStopRow() ?
-180          rangeProto.getStopRow().toByteArray() : null, rangeProto.getStopRowInclusive());
-181      rangeList.add(range);
-182    }
-183    return new MultiRowRangeFilter(rangeList);
-184  }
-185
-186  /**
-187   * @param o the filter to compare
-188   * @return true if and only if the fields of the filter that are serialized are equal to the
-189   * corresponding fields in other. Used for testing.
-190   */
-191  boolean areSerializedFieldsEqual(Filter o) {
-192    if (o == this)
-193      return true;
-194    if (!(o instanceof MultiRowRangeFilter))
-195      return false;
-196
-197    MultiRowRangeFilter other = (MultiRowRangeFilter) o;
-198    if (this.rangeList.size() != other.rangeList.size())
-199      return false;
-200    for (int i = 0; i &lt; rangeList.size(); ++i) {
-201      RowRange thisRange = this.rangeList.get(i);
-202      RowRange otherRange = other.rangeList.get(i);
-203      if (!(Bytes.equals(thisRange.startRow, otherRange.startRow) &amp;&amp; Bytes.equals(
-204          thisRange.stopRow, otherRange.stopRow) &amp;&amp; (thisRange.startRowInclusive ==
-205          otherRange.startRowInclusive) &amp;&amp; (thisRange.stopRowInclusive ==
-206          otherRange.stopRowInclusive))) {
-207        return false;
-208      }
-209    }
-210    return true;
-211  }
-212
-213  /**
-214   * calculate the position where the row key in the ranges list.
-215   *
-216   * @param rowKey the row key to calculate
-217   * @return index the position of the row key
-218   */
-219  private int getNextRangeIndex(byte[] rowKey) {
-220    RowRange temp = new RowRange(rowKey, true, null, true);
-221    int index = 
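
[Editor's note: the getNextRangeIndex method this hunk truncates finds, by binary search over the sorted range list, the range a row key falls into — or the following range, which is what drives the SEEK_NEXT_USING_HINT return code above. A simplified, hypothetical sketch of that idea over string keys; this is not the HBase implementation, just the search pattern:]

```java
import java.util.Arrays;
import java.util.List;

public class RangeIndexDemo {
    // A simplified stand-in for MultiRowRangeFilter.RowRange:
    // a half-open [start, stop) interval over lexicographically ordered keys.
    static class Range {
        final String start;
        final String stop;
        Range(String start, String stop) { this.start = start; this.stop = stop; }
    }

    // Binary search for the first range whose stop row is past the key.
    // If the key sits between ranges, the result is the index of the next
    // range, i.e. where a scanner should seek to.
    static int nextRangeIndex(List<Range> sorted, String key) {
        int lo = 0, hi = sorted.size();
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (sorted.get(mid).stop.compareTo(key) <= 0) {
                lo = mid + 1;   // key is entirely past this range
            } else {
                hi = mid;
            }
        }
        return lo;              // sorted.size() means "past every range"
    }

    public static void main(String[] args) {
        List<Range> ranges = Arrays.asList(new Range("a", "c"), new Range("f", "h"));
        System.out.println(nextRangeIndex(ranges, "b")); // inside the first range -> 0
        System.out.println(nextRangeIndex(ranges, "d")); // between ranges -> 1, seek to "f"
        System.out.println(nextRangeIndex(ranges, "z")); // past all ranges -> 2
    }
}
```

The real filter performs the same search over byte[] row keys with Bytes comparisons and inclusive/exclusive bounds.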

[35/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html 
b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
index 2b8632c..3d377a1 100644
--- a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
+++ b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
@@ -107,115 +107,115 @@
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterColumn(Cellcell)
+MultipleColumnPrefixFilter.filterColumn(Cellcell)
 
 
 Filter.ReturnCode
-MultipleColumnPrefixFilter.filterColumn(Cellcell)
+ColumnPrefixFilter.filterColumn(Cellcell)
 
 
 Filter.ReturnCode
-PrefixFilter.filterKeyValue(Cellv)
+FirstKeyOnlyFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-PageFilter.filterKeyValue(Cellignored)
+FamilyFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-KeyOnlyFilter.filterKeyValue(Cellignored)
+ColumnPaginationFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-WhileMatchFilter.filterKeyValue(Cellv)
+SingleColumnValueFilter.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-QualifierFilter.filterKeyValue(Cellv)
+PageFilter.filterKeyValue(Cellignored)
 
 
 Filter.ReturnCode
-FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cellv)
-Deprecated.
-
+RowFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-SkipFilter.filterKeyValue(Cellv)
+ColumnRangeFilter.filterKeyValue(Cellkv)
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterKeyValue(Cellcell)
+ColumnCountGetFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-ColumnCountGetFilter.filterKeyValue(Cellv)
+FuzzyRowFilter.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-DependentColumnFilter.filterKeyValue(Cellc)
+ValueFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-MultipleColumnPrefixFilter.filterKeyValue(Cellkv)
+DependentColumnFilter.filterKeyValue(Cellc)
 
 
+Filter.ReturnCode
+InclusiveStopFilter.filterKeyValue(Cellv)
+
+
 abstract Filter.ReturnCode
 Filter.filterKeyValue(Cellv)
 A way to filter based on the column family, column 
qualifier and/or the column value.
 
 
-
-Filter.ReturnCode
-FamilyFilter.filterKeyValue(Cellv)
-
 
 Filter.ReturnCode
 FilterList.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-FirstKeyOnlyFilter.filterKeyValue(Cellv)
+PrefixFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-RowFilter.filterKeyValue(Cellv)
+RandomRowFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-ValueFilter.filterKeyValue(Cellv)
+MultipleColumnPrefixFilter.filterKeyValue(Cellkv)
 
 
 Filter.ReturnCode
-MultiRowRangeFilter.filterKeyValue(Cellignored)
+WhileMatchFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-InclusiveStopFilter.filterKeyValue(Cellv)
+KeyOnlyFilter.filterKeyValue(Cellignored)
 
 
 Filter.ReturnCode
-FuzzyRowFilter.filterKeyValue(Cellc)
+SkipFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-SingleColumnValueFilter.filterKeyValue(Cellc)
+TimestampsFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-RandomRowFilter.filterKeyValue(Cellv)
+QualifierFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-ColumnRangeFilter.filterKeyValue(Cellkv)
+ColumnPrefixFilter.filterKeyValue(Cellcell)
 
 
 Filter.ReturnCode
-ColumnPaginationFilter.filterKeyValue(Cellv)
+MultiRowRangeFilter.filterKeyValue(Cellignored)
 
 
 Filter.ReturnCode
-TimestampsFilter.filterKeyValue(Cellv)
+FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cellv)
+Deprecated.
+
 
 
 static Filter.ReturnCode

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html 
b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
index 4222c2b..f3179e3 100644
--- a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
+++ b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
@@ -160,15 +160,15 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 Scan.setFilter(Filterfilter)
 
 
-Get
-Get.setFilter(Filterfilter)
-
-
 Query
 Query.setFilter(Filterfilter)
 Apply the specified server-side filter when performing the 
Query.
 
 
+
+Get
+Get.setFilter(Filterfilter)
+
 
 
 
@@ -394,75 +394,75 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 static Filter
-PrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+FirstKeyOnlyFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter

[38/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/Scan.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Scan.html 
b/apidocs/org/apache/hadoop/hbase/client/Scan.html
index 767c23c..a06d3ee 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Scan.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Scan.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":42,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":42,"i61":42,"i62":42,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":42,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":42,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":42,"i61":42,"i62":42,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"],32:["t6","Deprecated Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -400,7 +400,13 @@ extends 
 
 org.apache.hadoop.hbase.client.metrics.ScanMetrics
-getScanMetrics()
+getScanMetrics()
+Deprecated.
+Use ResultScanner.getScanMetrics()
 instead. And notice that, please do not
+ use this method and ResultScanner.getScanMetrics()
 together, the metrics
+ will be messed up.
+
+
 
 
 byte[]
@@ -472,8 +478,8 @@ extends 
 Scan
 setAllowPartialResults(booleanallowPartialResults)
-Setting whether the caller wants to see the partial results 
that may be returned from the
- server.
+Setting whether the caller wants to see the partial results 
when server returns
+ less-than-expected cells.
 
 
 
@@ -496,7 +502,7 @@ extends 
 Scan
 setBatch(intbatch)
-Set the maximum number of values to return for each call to 
next().
+Set the maximum number of cells to return for each call to 
next().
 
 
 
@@ -816,7 +822,7 @@ public static finalhttp://docs.oracle.com/javase/8/docs/api/java/
 
 
 HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
 Parameter name for client scanner sync/async prefetch 
toggle.
  When using async scanner, prefetching data from the server is done at the 
background.
  The parameter currently won't have any effect in the case that the user has 
set
@@ -833,7 +839,7 @@ public static finalhttp://docs.oracle.com/javase/8/docs/api/java/
 
 
 DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
-public static finalboolean DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
+public static finalboolean DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
 Default value of HBASE_CLIENT_SCANNER_ASYNC_PREFETCH.
 
 See Also:
@@ -855,7 +861,7 @@ public static finalhttp://docs.oracle.com/javase/8/docs/api/java/
 
 
 Scan
-publicScan()
+publicScan()
 Create a Scan operation across all rows.
 
 
@@ -866,7 +872,7 @@ public static finalhttp://docs.oracle.com/javase/8/docs/api/java/
 
 Scan
 http://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true;
 title="class or interface in java.lang">@Deprecated
-publicScan(byte[]startRow,
+publicScan(byte[]startRow,
 Filterfilter)
 Deprecated.use new 
Scan().withStartRow(startRow).setFilter(filter) instead.
 
@@ -878,7 +884,7 @@ public
 Scan
 http://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true;
 title="class or interface in java.lang">@Deprecated
-publicScan(byte[]startRow)
+publicScan(byte[]startRow)
 Deprecated.use new Scan().withStartRow(startRow) 
instead.
 Create a Scan operation starting at the specified row.
  
@@ -897,7 +903,7 @@ public
 Scan
 http://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true;
 title="class or interface in java.lang">@Deprecated

[43/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html 
b/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
index c89bc9e..490b465 100644
--- a/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
+++ b/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
@@ -263,7 +263,7 @@ the order they are declared.
 
 
 values
-public staticKeepDeletedCells[]values()
+public staticKeepDeletedCells[]values()
 Returns an array containing the constants of this enum 
type, in
 the order they are declared.  This method may be used to iterate
 over the constants as follows:
@@ -283,7 +283,7 @@ for (KeepDeletedCells c : KeepDeletedCells.values())
 
 
 valueOf
-public staticKeepDeletedCellsvalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
+public staticKeepDeletedCellsvalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Returns the enum constant of this type with the specified 
name.
 The string must match exactly an identifier used to declare an
 enum constant in this type.  (Extraneous whitespace characters are 
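
[Editor's note: the regenerated KeepDeletedCells page documents the compiler-generated values() and valueOf(String) members that every Java enum has. A small stand-alone illustration using a stand-in enum, since KeepDeletedCells itself lives in hbase-client:]

```java
public class EnumDemo {
    // Stand-in for an HBase-style enum; values()/valueOf(String) behave
    // identically on the real KeepDeletedCells type.
    enum KeepMode { FALSE, TRUE, TTL }

    public static void main(String[] args) {
        // values() returns the constants in declaration order.
        for (KeepMode c : KeepMode.values()) {
            System.out.println(c);
        }
        // valueOf(String) requires an exact identifier match and throws
        // IllegalArgumentException otherwise.
        KeepMode m = KeepMode.valueOf("TTL");
        System.out.println(m == KeepMode.TTL);
    }
}
```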

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html 
b/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
index a949e6e..c8dbfda 100644
--- a/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
+++ b/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
@@ -356,7 +356,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LOCAL
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String LOCAL
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String LOCAL
 local mode
 
 See Also:
@@ -370,7 +370,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LOCAL_COLON
-public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String LOCAL_COLON
+public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String LOCAL_COLON
 'local:'
 
 See Also:
@@ -392,7 +392,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LocalHBaseCluster
-publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf)
+publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf)
   throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 Constructor.
 
@@ -409,7 +409,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LocalHBaseCluster
-publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf,
+publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf,
  intnoRegionServers)
   throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 Constructor.
@@ -429,7 +429,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LocalHBaseCluster
-publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf,
+publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf,
  intnoMasters,
  intnoRegionServers)
   throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
@@ -451,7 +451,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LocalHBaseCluster
-publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf,
+publicLocalHBaseCluster(org.apache.hadoop.conf.Configurationconf,
  intnoMasters,
  intnoRegionServers,
  http://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true;
 title="class or interface in java.lang">Class? extends 
org.apache.hadoop.hbase.master.HMastermasterClass,
@@ -485,7 +485,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 addRegionServer
-publicorg.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThreadaddRegionServer()
+publicorg.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThreadaddRegionServer()
   

[47/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/index-all.html
--
diff --git a/apidocs/index-all.html b/apidocs/index-all.html
index d3283c2..31012a5 100644
--- a/apidocs/index-all.html
+++ b/apidocs/index-all.html
@@ -84,6 +84,8 @@
 
 abort a procedure
 
+abortProcedure(RpcController,
 MasterProtos.AbortProcedureRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 abortProcedureAsync(long,
 boolean) - Method in interface org.apache.hadoop.hbase.client.Admin
 
 Abort a procedure but does not block and wait for it be 
completely removed.
@@ -156,7 +158,7 @@
 
 addAllServers(CollectionAddress)
 - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
 
-Adds a group of servers.
+Adds the given servers to the group.
 
 addAllTables(CollectionTableName)
 - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
 
@@ -208,6 +210,8 @@
 
 Get the column from the specified family with the specified 
qualifier.
 
+addColumn(RpcController,
 MasterProtos.AddColumnRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 addColumnFamily(TableName,
 HColumnDescriptor) - Method in interface 
org.apache.hadoop.hbase.client.Admin
 
 Add a column family to an existing table.
@@ -384,6 +388,8 @@
 
 Add a new replication peer for replicating data to slave 
cluster
 
+addReplicationPeer(RpcController,
 ReplicationProtos.AddReplicationPeerRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 Address - 
Class in org.apache.hadoop.hbase.net
 
 An immutable type to hold a hostname and port combo, like 
an Endpoint
@@ -393,7 +399,7 @@
 
 addServer(Address)
 - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
 
-Adds the server to the group.
+Adds the given server to the group.
 
 addTable(TableName)
 - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
 
@@ -516,6 +522,8 @@
 
 assign(byte[])
 - Method in interface org.apache.hadoop.hbase.client.Admin
 
+assignRegion(RpcController,
 MasterProtos.AssignRegionRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 AsyncAdmin - Interface in org.apache.hadoop.hbase.client
 
 The asynchronous administrative API for HBase.
@@ -572,6 +580,8 @@
 
 BadAuthException(String,
 Throwable) - Constructor for exception 
org.apache.hadoop.hbase.ipc.BadAuthException
 
+balance(RpcController,
 MasterProtos.BalanceRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 balancer()
 - Method in interface org.apache.hadoop.hbase.client.Admin
 
 Invoke the balancer.
@@ -803,6 +813,8 @@
 
 BULK_OUTPUT_CONF_KEY
 - Static variable in class org.apache.hadoop.hbase.mapreduce.ImportTsv
 
+BULK_OUTPUT_CONF_KEY
 - Static variable in class org.apache.hadoop.hbase.mapreduce.WALPlayer
+
 BULKLOAD_DIR_NAME
 - Static variable in class org.apache.hadoop.hbase.mob.MobConstants
 
 BULKLOAD_MAX_RETRIES_NUMBER
 - Static variable in class org.apache.hadoop.hbase.HConstants
@@ -1263,6 +1275,8 @@
 
 Closes the scanner and releases any resources it has 
allocated
 
+close()
 - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 close() - 
Method in interface org.apache.hadoop.hbase.client.Table
 
 Releases any resources held or pending changes in internal 
buffers.
@@ -2157,7 +2171,7 @@
 
 Takes a compareOperator symbol as a byte array and returns 
the corresponding CompareOperator
 
-createCompleteResult(List&lt;Result&gt;)
 - Static method in class org.apache.hadoop.hbase.client.Result
+createCompleteResult(Iterable&lt;Result&gt;)
 - Static method in class org.apache.hadoop.hbase.client.Result
 
 Forms a single result from the partial results in the 
partialResults list.
 
@@ -2314,6 +2328,12 @@
 
 Create a new namespace.
 
+createNamespace(NamespaceDescriptor)
 - Method in interface org.apache.hadoop.hbase.client.AsyncAdmin
+
+Create a new namespace.
+
+createNamespace(RpcController,
 MasterProtos.CreateNamespaceRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 createNamespaceAsync(NamespaceDescriptor)
 - Method in interface org.apache.hadoop.hbase.client.Admin
 
 Create a new namespace
@@ -2409,6 +2429,8 @@
 
 Creates a new table with an initial set of empty regions 
defined by the specified split keys.
 
+createTable(RpcController,
 MasterProtos.CreateTableRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 createTable(HTableDescriptor)
 - Method in class org.apache.hadoop.hbase.rest.client.RemoteAdmin
 
 Creates a new table.
@@ -3230,6 +3252,8 @@
  Use Admin.deleteColumnFamily(TableName,
 byte[])}.
 
 
+deleteColumn(RpcController,
 MasterProtos.DeleteColumnRequest) - Method in class 
org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
 deleteColumnFamily(TableName,
 byte[]) - Method in interface 

[23/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
index 0859dfa..88da4c0 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
@@ -177,7 +177,7 @@
 169
 170// Create the failed puts list if 
necessary
 171if (failedPuts == null) {
-172  failedPuts = new 
ArrayList<Put>();
+172  failedPuts = new 
ArrayList<>();
 173}
 174// Add the put to the failed puts 
list
 175failedPuts.add(put);
@@ -296,10 +296,10 @@
 288  this.totalFailedPutCounter = 0;
 289  this.maxLatency = 0;
 290  this.overallAverageLatency = 0;
-291  this.serverToBufferedCounterMap = 
new HashMap<String, Long>();
-292  this.serverToFailedCounterMap = new 
HashMap<String, Long>();
-293  this.serverToAverageLatencyMap = 
new HashMap<String, Long>();
-294  this.serverToMaxLatencyMap = new 
HashMap<String, Long>();
+291  this.serverToBufferedCounterMap = 
new HashMap<>();
+292  this.serverToFailedCounterMap = new 
HashMap<>();
+293  this.serverToAverageLatencyMap = 
new HashMap<>();
+294  this.serverToMaxLatencyMap = new 
HashMap<>();
 295  
this.initialize(serverToFlushWorkerMap);
 296}
 297
@@ -420,7 +420,7 @@
 412}
 413
 414public synchronized 
SimpleEntry<Long, Integer> getComponents() {
-415  return new SimpleEntry<Long, 
Integer>(sum, count);
+415  return new SimpleEntry<>(sum, 
count);
 416}
 417
 418public synchronized void reset() {
@@ -622,7 +622,7 @@
 614  failedCount--;
 615} else {
 616  if (failed == null) {
-617failed = new 
ArrayList<PutStatus>();
+617failed = new 
ArrayList<>();
 618  }
 619  
failed.add(processingList.get(i));
 620}
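
The replaced lines in this hunk swap explicit generic type arguments for the Java 7 diamond operator, which is what turns `new ArrayList<Put>()` into `new ArrayList<>()` in the rendered source. A minimal, self-contained illustration of the two styles (plain JDK types only; nothing here is HBase API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondSketch {
    // Pre-Java-7 style, as on the removed (-) lines: the type arguments
    // are spelled out on both sides of the assignment.
    static List<String> explicitStyle() {
        return new ArrayList<String>();
    }

    // Java 7+ diamond, as on the added (+) lines: the compiler infers the
    // arguments from the declared type on the left.
    static Map<String, Long> diamondStyle() {
        return new HashMap<>();
    }
}
```

Both forms produce identical bytecode; the diamond form only removes redundancy at the declaration site.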

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
index 0859dfa..88da4c0 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
@@ -177,7 +177,7 @@
 169
 170// Create the failed puts list if 
necessary
 171if (failedPuts == null) {
-172  failedPuts = new 
ArrayList<Put>();
+172  failedPuts = new 
ArrayList<>();
 173}
 174// Add the put to the failed puts 
list
 175failedPuts.add(put);
@@ -296,10 +296,10 @@
 288  this.totalFailedPutCounter = 0;
 289  this.maxLatency = 0;
 290  this.overallAverageLatency = 0;
-291  this.serverToBufferedCounterMap = 
new HashMap<String, Long>();
-292  this.serverToFailedCounterMap = new 
HashMap<String, Long>();
-293  this.serverToAverageLatencyMap = 
new HashMap<String, Long>();
-294  this.serverToMaxLatencyMap = new 
HashMap<String, Long>();
+291  this.serverToBufferedCounterMap = 
new HashMap<>();
+292  this.serverToFailedCounterMap = new 
HashMap<>();
+293  this.serverToAverageLatencyMap = 
new HashMap<>();
+294  this.serverToMaxLatencyMap = new 
HashMap<>();
 295  
this.initialize(serverToFlushWorkerMap);
 296}
 297
@@ -420,7 +420,7 @@
 412}
 413
 414public synchronized 
SimpleEntry<Long, Integer> getComponents() {
-415  return new SimpleEntry<Long, 
Integer>(sum, count);
+415  return new SimpleEntry<>(sum, 
count);
 416}
 417
 418public synchronized void reset() {
@@ -622,7 +622,7 @@
 614  failedCount--;
 615} else {
 616  if (failed == null) {
-617failed = new 
ArrayList<PutStatus>();
+617failed = new 
ArrayList<>();
 618  }
 619  
failed.add(processingList.get(i));
 620}

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html
index 89f978d..753dd06 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html
@@ -212,140 +212,139 @@
 204   */
 205  

[05/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html 
b/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
index 1365aec..351faa9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
@@ -65,381 +65,381 @@
 057import 
org.apache.hadoop.hbase.client.Table;
 058import 
org.apache.hadoop.hbase.client.coprocessor.Batch;
 059import 
org.apache.hadoop.hbase.client.coprocessor.Batch.Callback;
-060import 
org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-061import 
org.apache.hadoop.hbase.io.TimeRange;
-062import 
org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
-063import 
org.apache.hadoop.hbase.rest.Constants;
-064import 
org.apache.hadoop.hbase.rest.model.CellModel;
-065import 
org.apache.hadoop.hbase.rest.model.CellSetModel;
-066import 
org.apache.hadoop.hbase.rest.model.RowModel;
-067import 
org.apache.hadoop.hbase.rest.model.ScannerModel;
-068import 
org.apache.hadoop.hbase.rest.model.TableSchemaModel;
-069import 
org.apache.hadoop.hbase.util.Bytes;
-070import 
org.apache.hadoop.util.StringUtils;
-071
-072import com.google.protobuf.Descriptors;
-073import com.google.protobuf.Message;
-074import com.google.protobuf.Service;
-075import 
com.google.protobuf.ServiceException;
-076
-077/**
-078 * HTable interface to remote tables 
accessed via REST gateway
-079 */
-080@InterfaceAudience.Public
-081@InterfaceStability.Stable
-082public class RemoteHTable implements 
Table {
-083
-084  private static final Log LOG = 
LogFactory.getLog(RemoteHTable.class);
-085
-086  final Client client;
-087  final Configuration conf;
-088  final byte[] name;
-089  final int maxRetries;
-090  final long sleepTime;
-091
-092  @SuppressWarnings("rawtypes")
-093  protected String buildRowSpec(final 
byte[] row, final Map familyMap,
-094  final long startTime, final long 
endTime, final int maxVersions) {
-095StringBuffer sb = new 
StringBuffer();
-096sb.append('/');
-097sb.append(Bytes.toString(name));
-098sb.append('/');
-099sb.append(toURLEncodedBytes(row));
-100Set families = 
familyMap.entrySet();
-101if (families != null) {
-102  Iterator i = 
familyMap.entrySet().iterator();
-103  sb.append('/');
-104  while (i.hasNext()) {
-105Map.Entry e = 
(Map.Entry)i.next();
-106Collection quals = 
(Collection)e.getValue();
-107if (quals == null || 
quals.isEmpty()) {
-108  // this is an unqualified 
family. append the family name and NO ':'
-109  
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-110} else {
-111  Iterator ii = 
quals.iterator();
-112  while (ii.hasNext()) {
-113
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-114sb.append(':');
-115Object o = ii.next();
-116// Puts use byte[] but 
Deletes use KeyValue
-117if (o instanceof byte[]) {
-118  
sb.append(toURLEncodedBytes((byte[])o));
-119} else if (o instanceof 
KeyValue) {
-120  
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-121} else {
-122  throw new 
RuntimeException("object type not handled");
-123}
-124if (ii.hasNext()) {
-125  sb.append(',');
-126}
-127  }
-128}
-129if (i.hasNext()) {
-130  sb.append(',');
-131}
-132  }
-133}
-134if (startTime >= 0 && 
endTime != Long.MAX_VALUE) {
-135  sb.append('/');
-136  sb.append(startTime);
-137  if (startTime != endTime) {
-138sb.append(',');
-139sb.append(endTime);
-140  }
-141} else if (endTime != Long.MAX_VALUE) 
{
-142  sb.append('/');
-143  sb.append(endTime);
-144}
-145if (maxVersions > 1) {
-146  sb.append("?v=");
-147  sb.append(maxVersions);
-148}
-149return sb.toString();
-150  }
-151
-152  protected String 
buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-153StringBuilder sb = new 
StringBuilder();
-154sb.append('/');
-155sb.append(Bytes.toString(name));
-156sb.append("/multiget/");
-157if (rows == null || rows.length == 0) 
{
-158  return sb.toString();
-159}
-160sb.append("?");
-161for(int i=0; i<rows.length; i++) 
{
-162  byte[] rk = rows[i];
-163  if (i != 0) {
-164sb.append('&');
-165  }
-166  sb.append("row=");
-167  sb.append(toURLEncodedBytes(rk));
-168}
-169sb.append("&v=");
-170sb.append(maxVersions);
-171
-172return sb.toString();
-173  }
-174
-175  protected Result[] 
buildResultFromModel(final CellSetModel model) {
-176List<Result> results 

[12/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
index 9c09190..07c69da 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
@@ -145,7 +145,7 @@
 137
 138  static <V extends Cell> 
RecordWriter<ImmutableBytesWritable, V>
 139  createRecordWriter(final 
TaskAttemptContext context) throws IOException {
-140return new 
HFileRecordWriter<V>(context, null);
+140return new 
HFileRecordWriter<>(context, null);
 141  }
 142
 143  protected static class 
HFileRecordWriter<V extends Cell>
@@ -219,7 +219,7 @@
 211overriddenEncoding = null;
 212  }
 213
-214  writers = new TreeMap<byte[], 
WriterLength>(Bytes.BYTES_COMPARATOR);
+214  writers = new 
TreeMap<>(Bytes.BYTES_COMPARATOR);
 215  previousRow = 
HConstants.EMPTY_BYTE_ARRAY;
 216  now = 
Bytes.toBytes(EnvironmentEdgeManager.currentTime());
 217  rollRequested = false;
@@ -426,435 +426,429 @@
 418  private static 
List<ImmutableBytesWritable> getRegionStartKeys(RegionLocator table)
 419  throws IOException {
 420byte[][] byteKeys = 
table.getStartKeys();
-421
ArrayList<ImmutableBytesWritable> ret =
-422  new 
ArrayList<ImmutableBytesWritable>(byteKeys.length);
-423for (byte[] byteKey : byteKeys) {
-424  ret.add(new 
ImmutableBytesWritable(byteKey));
-425}
-426return ret;
-427  }
-428
-429  /**
-430   * Write out a {@link SequenceFile} 
that can be read by
-431   * {@link TotalOrderPartitioner} that 
contains the split points in startKeys.
-432   */
-433  @SuppressWarnings("deprecation")
-434  private static void 
writePartitions(Configuration conf, Path partitionsPath,
-435  List<ImmutableBytesWritable> 
startKeys) throws IOException {
-436LOG.info("Writing partition 
information to " + partitionsPath);
-437if (startKeys.isEmpty()) {
-438  throw new 
IllegalArgumentException("No regions passed");
-439}
-440
-441// We're generating a list of split 
points, and we don't ever
-442// have keys < the first region 
(which has an empty start key)
-443// so we need to remove it. Otherwise 
we would end up with an
-444// empty reducer with index 0
-445TreeSet<ImmutableBytesWritable> 
sorted =
-446  new 
TreeSet<ImmutableBytesWritable>(startKeys);
-447
-448ImmutableBytesWritable first = 
sorted.first();
-449if 
(!first.equals(HConstants.EMPTY_BYTE_ARRAY)) {
-450  throw new 
IllegalArgumentException(
-451  "First region of table should 
have empty start key. Instead has: "
-452  + 
Bytes.toStringBinary(first.get()));
-453}
-454sorted.remove(first);
-455
-456// Write the actual file
-457FileSystem fs = 
partitionsPath.getFileSystem(conf);
-458SequenceFile.Writer writer = 
SequenceFile.createWriter(
-459  fs, conf, partitionsPath, 
ImmutableBytesWritable.class,
-460  NullWritable.class);
-461
-462try {
-463  for (ImmutableBytesWritable 
startKey : sorted) {
-464writer.append(startKey, 
NullWritable.get());
-465  }
-466} finally {
-467  writer.close();
-468}
-469  }
-470
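
The writePartitions() logic shown above (sort the region start keys, require that the first one is empty, then drop it so reducer 0 is not empty) can be sketched with plain strings standing in for ImmutableBytesWritable; class and method names here are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class SplitPointSketch {
    // Mirrors the split-point derivation above: region start keys become
    // reducer split points, except the first region's empty start key,
    // which would otherwise leave an empty reducer at index 0.
    static List<String> toSplitPoints(List<String> startKeys) {
        TreeSet<String> sorted = new TreeSet<>(startKeys);
        String first = sorted.first();
        if (!first.isEmpty()) {
            throw new IllegalArgumentException(
                "First region of table should have empty start key. Instead has: " + first);
        }
        sorted.remove(first);
        return new ArrayList<>(sorted);
    }
}
```

The TreeSet both deduplicates and sorts, so the surviving keys are already in total order for the partitioner.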
-471  /**
-472   * Configure a MapReduce Job to perform 
an incremental load into the given
-473   * table. This
-474   * <ul>
-475   *   <li>Inspects the table to 
configure a total order partitioner</li>
-476   *   <li>Uploads the partitions 
file to the cluster and adds it to the DistributedCache</li>
-477   *   <li>Sets the number of 
reduce tasks to match the current number of regions</li>
-478   *   <li>Sets the output 
key/value class to match HFileOutputFormat2's requirements</li>
-479   *   <li>Sets the reducer up to 
perform the appropriate sorting (either KeyValueSortReducer or
-480   * PutSortReducer)</li>
-481   * </ul>
-482   * The user should be sure to set the 
map output value class to either KeyValue or Put before
-483   * running this function.
-484   */
-485  public static void 
configureIncrementalLoad(Job job, Table table, RegionLocator regionLocator)
-486  throws IOException {
-487configureIncrementalLoad(job, 
table.getTableDescriptor(), regionLocator);
-488  }
-489
-490  /**
-491   * Configure a MapReduce Job to perform 
an incremental load into the given
-492   * table. This
-493   * <ul>
-494   *   <li>Inspects the table to 
configure a total order partitioner</li>
-495   *   <li>Uploads the partitions 
file to the cluster and adds it to the DistributedCache</li>
-496   *   <li>Sets the number of 
reduce tasks to match the current number of regions</li>
-497   *   <li>Sets the output 
key/value class to match HFileOutputFormat2's requirements</li>
-498   *   <li>Sets the reducer up to 
perform the 

[09/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
index 39ebcf3..9340f9d 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
@@ -73,41 +73,40 @@
 065final FileSystem fs = 
outputDir.getFileSystem(conf);
 066
 067// Map of tables to writers
-068final Map<ImmutableBytesWritable, 
RecordWriter<ImmutableBytesWritable, V>> tableWriters =
-069new 
HashMap<ImmutableBytesWritable, RecordWriter<ImmutableBytesWritable, 
V>>();
-070
-071return new 
RecordWriter<ImmutableBytesWritable, V>() {
-072  @Override
-073  public void 
write(ImmutableBytesWritable tableName, V cell)
-074  throws IOException, 
InterruptedException {
-075
RecordWriterImmutableBytesWritable, V tableWriter = 
tableWriters.get(tableName);
-076// if there is new table, verify 
that table directory exists
-077if (tableWriter == null) {
-078  // using table name as 
directory name
-079  final Path tableOutputDir = new 
Path(outputDir, Bytes.toString(tableName.copyBytes()));
-080  fs.mkdirs(tableOutputDir);
-081  LOG.info("Writing Table '" + 
tableName.toString() + "' data into following directory"
-082  + 
tableOutputDir.toString());
-083
-084  // Create writer for one 
specific table
-085  tableWriter = new 
HFileOutputFormat2.HFileRecordWriter<V>(context, tableOutputDir);
-086  // Put table into map
-087  tableWriters.put(tableName, 
tableWriter);
-088}
-089// Write Row, Cell into 
tableWriter
-090// in the original code, it does 
not use Row
-091tableWriter.write(null, cell);
-092  }
-093
-094  @Override
-095  public void 
close(TaskAttemptContext c) throws IOException, InterruptedException {
-096for 
(RecordWriter<ImmutableBytesWritable, V> writer : tableWriters.values()) 
{
-097  writer.close(c);
-098}
-099  }
-100};
-101  }
-102}
+068final Map<ImmutableBytesWritable, 
RecordWriter<ImmutableBytesWritable, V>> tableWriters = new 
HashMap<>();
+069
+070return new 
RecordWriter<ImmutableBytesWritable, V>() {
+071  @Override
+072  public void 
write(ImmutableBytesWritable tableName, V cell)
+073  throws IOException, 
InterruptedException {
+074
RecordWriter<ImmutableBytesWritable, V> tableWriter = 
tableWriters.get(tableName);
+075// if there is new table, verify 
that table directory exists
+076if (tableWriter == null) {
+077  // using table name as 
directory name
+078  final Path tableOutputDir = new 
Path(outputDir, Bytes.toString(tableName.copyBytes()));
+079  fs.mkdirs(tableOutputDir);
+080  LOG.info("Writing Table '" + 
tableName.toString() + "' data into following directory"
+081  + 
tableOutputDir.toString());
+082
+083  // Create writer for one 
specific table
+084  tableWriter = new 
HFileOutputFormat2.HFileRecordWriter<>(context, tableOutputDir);
+085  // Put table into map
+086  tableWriters.put(tableName, 
tableWriter);
+087}
+088// Write Row, Cell into 
tableWriter
+089// in the original code, it does 
not use Row
+090tableWriter.write(null, cell);
+091  }
+092
+093  @Override
+094  public void 
close(TaskAttemptContext c) throws IOException, InterruptedException {
+095for 
(RecordWriter<ImmutableBytesWritable, V> writer : tableWriters.values()) 
{
+096  writer.close(c);
+097}
+098  }
+099};
+100  }
+101}
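
The write() method above lazily creates one delegate writer per table and caches it in a map. That pattern can be sketched with JDK types alone (StringBuilder stands in for the per-table HFileRecordWriter; the class name is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class PerTableWriterSketch {
    // One "writer" per table name, created on first use, as in
    // MultiHFileOutputFormat's record writer above.
    final Map<String, StringBuilder> writers = new HashMap<>();

    void write(String tableName, String cell) {
        // computeIfAbsent collapses the original null-check-and-put
        // into a single call; behavior is the same.
        writers.computeIfAbsent(tableName, t -> new StringBuilder()).append(cell);
    }
}
```

On close, the real code iterates `tableWriters.values()` and closes each delegate; the map makes that teardown a single loop.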
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
index f4e33a0..bb2a823 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
@@ -100,7 +100,7 @@
 092  throw new 
IllegalArgumentException("There must be at least 1 scan configuration set to : 
"
 093  + SCANS);
 094}
-095List<Scan> scans = new 
ArrayList<Scan>();
+095List<Scan> scans = new 
ArrayList<>();
 096
 097for (int i = 0; i < 
rawScans.length; i++) {
 098  try {


[17/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
index 7f2ddf1..c6e88dd 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
@@ -27,33 +27,42 @@
 019
 020import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
 021import 
org.apache.hadoop.hbase.classification.InterfaceStability;
-022
-023/**
-024 * Receives {@link Result} for an 
asynchronous scan.
-025 */
-026@InterfaceAudience.Public
-027@InterfaceStability.Unstable
-028public interface ScanResultConsumer {
-029
-030  /**
-031   * @param result the data fetched from 
HBase service.
-032   * @return {@code false} if you want to 
terminate the scan process. Otherwise {@code true}
-033   */
-034  boolean onNext(Result result);
-035
-036  /**
-037   * Indicate that we hit an 
unrecoverable error and the scan operation is terminated.
-038   * <p>
-039   * We will not call {@link 
#onComplete()} after calling {@link #onError(Throwable)}.
-040   */
-041  void onError(Throwable error);
-042
-043  /**
-044   * Indicate that the scan operation is 
completed normally.
-045   */
-046  void onComplete();
-047
-048}
+022import 
org.apache.hadoop.hbase.client.metrics.ScanMetrics;
+023
+024/**
+025 * Receives {@link Result} for an 
asynchronous scan.
+026 */
+027@InterfaceAudience.Public
+028@InterfaceStability.Unstable
+029public interface ScanResultConsumer {
+030
+031  /**
+032   * @param result the data fetched from 
HBase service.
+033   * @return {@code false} if you want to 
terminate the scan process. Otherwise {@code true}
+034   */
+035  boolean onNext(Result result);
+036
+037  /**
+038   * Indicate that we hit an 
unrecoverable error and the scan operation is terminated.
+039   * <p>
+040   * We will not call {@link 
#onComplete()} after calling {@link #onError(Throwable)}.
+041   */
+042  void onError(Throwable error);
+043
+044  /**
+045   * Indicate that the scan operation is 
completed normally.
+046   */
+047  void onComplete();
+048
+049  /**
+050   * If {@code 
scan.isScanMetricsEnabled()} returns true, then this method will be called 
prior to
+051   * all other methods in this interface 
to give you the {@link ScanMetrics} instance for this scan
+052   * operation. The {@link ScanMetrics} 
instance will be updated on-the-fly during the scan, you can
+053   * store it somewhere to get the 
metrics at any time if you want.
+054   */
+055  default void 
onScanMetricsCreated(ScanMetrics scanMetrics) {
+056  }
+057}
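
The callback flow defined by ScanResultConsumer above (onNext until the consumer returns false, then onComplete) can be exercised without a cluster using a local analogue; the nested interface and String results below are stand-ins for the HBase types, not the real API:

```java
import java.util.ArrayList;
import java.util.List;

public class ScanConsumerSketch {
    // Local analogue of ScanResultConsumer; String stands in for Result.
    interface Consumer {
        boolean onNext(String result);       // false terminates the scan
        default void onError(Throwable t) {} // terminal, like the real contract
        default void onComplete() {}
    }

    // Drives a consumer the way an asynchronous scan would: deliver results
    // until the consumer declines or input is exhausted, then complete.
    static List<String> drive(Consumer c, List<String> results) {
        List<String> delivered = new ArrayList<>();
        for (String r : results) {
            delivered.add(r);
            if (!c.onNext(r)) {
                break;
            }
        }
        c.onComplete();
        return delivered;
    }
}
```

Because onScanMetricsCreated is a `default` method in the real interface, existing implementations keep compiling when it is added; they simply ignore the metrics unless they override it.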
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
index f4ced21..11f3dbd 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
@@ -135,7 +135,7 @@
 127final List<HRegionInfo> 
restoredRegions = meta.getRegionsToAdd();
 128
 129htd = meta.getTableDescriptor();
-130regions = new 
ArrayList<HRegionInfo>(restoredRegions.size());
+130regions = new 
ArrayList<>(restoredRegions.size());
 131for (HRegionInfo hri: 
restoredRegions) {
 132  if 
(CellUtil.overlappingKeys(scan.getStartRow(), scan.getStopRow(),
 133  hri.getStartKey(), 
hri.getEndKey())) {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
index b2f1221..b9503b7 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
@@ -26,49 +26,49 @@
 018 */
 019package 
org.apache.hadoop.hbase.client.replication;
 020
-021import 
com.google.common.annotations.VisibleForTesting;
-022import com.google.common.collect.Lists;
-023
-024import java.io.Closeable;
-025import java.io.IOException;
-026import java.util.ArrayList;
-027import java.util.Collection;
-028import java.util.HashMap;
-029import java.util.HashSet;
-030import java.util.List;
-031import java.util.Map;
-032import java.util.TreeMap;
-033import java.util.Map.Entry;
-034import 

[08/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
index 7003255..a2b0cb0 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
@@ -145,523 +145,522 @@
 137
 138  
 139  /** The reverse DNS lookup cache 
mapping: IPAddress = HostName */
-140  private HashMapInetAddress, 
String reverseDNSCacheMap =
-141new HashMapInetAddress, 
String();
-142
-143  /**
-144   * Builds a {@link TableRecordReader}. 
If no {@link TableRecordReader} was provided, uses
-145   * the default.
-146   *
-147   * @param split  The split to work 
with.
-148   * @param context  The current 
context.
-149   * @return The newly created record 
reader.
-150   * @throws IOException When creating 
the reader fails.
-151   * @see 
org.apache.hadoop.mapreduce.InputFormat#createRecordReader(
-152   *   
org.apache.hadoop.mapreduce.InputSplit,
-153   *   
org.apache.hadoop.mapreduce.TaskAttemptContext)
-154   */
-155  @Override
-156  public 
RecordReader<ImmutableBytesWritable, Result> createRecordReader(
-157  InputSplit split, 
TaskAttemptContext context)
-158  throws IOException {
-159// Just in case a subclass is relying 
on JobConfigurable magic.
-160if (table == null) {
-161  initialize(context);
-162}
-163// null check in case our child 
overrides getTable to not throw.
-164try {
-165  if (getTable() == null) {
-166// initialize() must not have 
been implemented in the subclass.
-167throw new 
IOException(INITIALIZATION_ERROR);
-168  }
-169} catch (IllegalStateException 
exception) {
-170  throw new 
IOException(INITIALIZATION_ERROR, exception);
-171}
-172TableSplit tSplit = (TableSplit) 
split;
-173LOG.info("Input split length: " + 
StringUtils.humanReadableInt(tSplit.getLength()) + " bytes.");
-174final TableRecordReader trr =
-175this.tableRecordReader != null ? 
this.tableRecordReader : new TableRecordReader();
-176Scan sc = new Scan(this.scan);
-177
sc.setStartRow(tSplit.getStartRow());
-178sc.setStopRow(tSplit.getEndRow());
-179trr.setScan(sc);
-180trr.setTable(getTable());
-181return new 
RecordReader<ImmutableBytesWritable, Result>() {
-182
-183  @Override
-184  public void close() throws 
IOException {
-185trr.close();
-186closeTable();
-187  }
-188
-189  @Override
-190  public ImmutableBytesWritable 
getCurrentKey() throws IOException, InterruptedException {
-191return trr.getCurrentKey();
-192  }
-193
-194  @Override
-195  public Result getCurrentValue() 
throws IOException, InterruptedException {
-196return trr.getCurrentValue();
-197  }
-198
-199  @Override
-200  public float getProgress() throws 
IOException, InterruptedException {
-201return trr.getProgress();
-202  }
-203
-204  @Override
-205  public void initialize(InputSplit 
inputsplit, TaskAttemptContext context) throws IOException,
-206  InterruptedException {
-207trr.initialize(inputsplit, 
context);
-208  }
-209
-210  @Override
-211  public boolean nextKeyValue() 
throws IOException, InterruptedException {
-212return trr.nextKeyValue();
-213  }
-214};
-215  }
-216
-217  protected Pair<byte[][],byte[][]> 
getStartEndKeys() throws IOException {
-218return 
getRegionLocator().getStartEndKeys();
-219  }
-220
-221  /**
-222   * Calculates the splits that will 
serve as input for the map tasks. The
-223   * number of splits matches the number 
of regions in a table.
-224   *
-225   * @param context  The current job 
context.
-226   * @return The list of input splits.
-227   * @throws IOException When creating 
the list of splits fails.
-228   * @see 
org.apache.hadoop.mapreduce.InputFormat#getSplits(
-229   *   
org.apache.hadoop.mapreduce.JobContext)
-230   */
-231  @Override
-232  public List<InputSplit> 
getSplits(JobContext context) throws IOException {
-233boolean closeOnFinish = false;
-234
-235// Just in case a subclass is relying 
on JobConfigurable magic.
-236if (table == null) {
-237  initialize(context);
-238  closeOnFinish = true;
-239}
-240
-241// null check in case our child 
overrides getTable to not throw.
-242try {
-243  if (getTable() == null) {
-244// initialize() must not have 
been implemented in the subclass.
-245throw new 
IOException(INITIALIZATION_ERROR);
-246  }
-247} catch (IllegalStateException 
exception) {
-248  throw new 
IOException(INITIALIZATION_ERROR, exception);
-249}
-250
-251

[02/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html 
b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
index 608d19a..f42fb90 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
@@ -606,119 +606,118 @@
 598Path snapshotDir = 
SnapshotDescriptionUtils.getSnapshotsDir(rootDir);
 599FileStatus[] snapshots = 
fs.listStatus(snapshotDir,
 600new 
SnapshotDescriptionUtils.CompletedSnaphotDirectoriesFilter(fs));
-601List<SnapshotDescription> 
snapshotLists =
-602  new 
ArrayList<SnapshotDescription>(snapshots.length);
-603for (FileStatus snapshotDirStat: 
snapshots) {
-604  HBaseProtos.SnapshotDescription 
snapshotDesc =
-605  
SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDirStat.getPath());
-606  
snapshotLists.add(ProtobufUtil.createSnapshotDesc(snapshotDesc));
-607}
-608return snapshotLists;
-609  }
-610
-611  /**
-612   * Gets the store files map for 
snapshot
-613   * @param conf the {@link 
Configuration} to use
-614   * @param snapshot {@link 
SnapshotDescription} to get stats from
-615   * @param exec the {@link 
ExecutorService} to use
-616   * @param filesMap {@link Map} the map 
to put the mapping entries
-617   * @param uniqueHFilesArchiveSize 
{@link AtomicLong} the accumulated store file size in archive
-618   * @param uniqueHFilesSize {@link 
AtomicLong} the accumulated store file size shared
-619   * @param uniqueHFilesMobSize {@link 
AtomicLong} the accumulated mob store file size shared
-620   * @return the snapshot stats
-621   */
-622  private static void 
getSnapshotFilesMap(final Configuration conf,
-623  final SnapshotDescription snapshot, 
final ExecutorService exec,
-625  final ConcurrentHashMap<Path, 
Integer> filesMap,
-625  final AtomicLong 
uniqueHFilesArchiveSize, final AtomicLong uniqueHFilesSize,
-626  final AtomicLong 
uniqueHFilesMobSize) throws IOException {
-627HBaseProtos.SnapshotDescription 
snapshotDesc =
-628
ProtobufUtil.createHBaseProtosSnapshotDesc(snapshot);
-629Path rootDir = 
FSUtils.getRootDir(conf);
-630final FileSystem fs = 
FileSystem.get(rootDir.toUri(), conf);
-631
-632Path snapshotDir = 
SnapshotDescriptionUtils.getCompletedSnapshotDir(snapshotDesc, rootDir);
-633SnapshotManifest manifest = 
SnapshotManifest.open(conf, fs, snapshotDir, snapshotDesc);
-634
SnapshotReferenceUtil.concurrentVisitReferencedFiles(conf, fs, manifest, 
exec,
-635new 
SnapshotReferenceUtil.SnapshotVisitor() {
-636  @Override public void 
storeFile(final HRegionInfo regionInfo, final String family,
-637  final 
SnapshotRegionManifest.StoreFile storeFile) throws IOException {
-638if 
(!storeFile.hasReference()) {
-639  HFileLink link = 
HFileLink.build(conf, snapshot.getTableName(),
-640  
regionInfo.getEncodedName(), family, storeFile.getName());
-641  long size;
-642  Integer count;
-643  Path p;
-644  AtomicLong al;
-645  int c = 0;
-646
-647  if 
(fs.exists(link.getArchivePath())) {
-648p = 
link.getArchivePath();
-649al = 
uniqueHFilesArchiveSize;
-650size = 
fs.getFileStatus(p).getLen();
-651  } else if 
(fs.exists(link.getMobPath())) {
-652p = link.getMobPath();
-653al = 
uniqueHFilesMobSize;
-654size = 
fs.getFileStatus(p).getLen();
-655  } else {
-656p = 
link.getOriginPath();
-657al = uniqueHFilesSize;
-658size = 
link.getFileStatus(fs).getLen();
-659  }
-660
-661  // If it has been counted, 
do not double count
-662  count = filesMap.get(p);
-663  if (count != null) {
-664c = count.intValue();
-665  } else {
-666al.addAndGet(size);
-667  }
-668
-669  filesMap.put(p, ++c);
-670}
-671  }
-672});
-673  }
-674
-675  /**
-676   * Returns the map of store files based 
on path for all snapshots
-677   * @param conf the {@link 
Configuration} to use
-678   * @param uniqueHFilesArchiveSize pass 
out the size for store files in archive
-679   * @param uniqueHFilesSize pass out the 
size for store files shared
-680   * @param uniqueHFilesMobSize pass out 
the size for mob store files shared
-681   * @return the map of store files
-682   */
-683  public static Map<Path, Integer> 
getSnapshotsFilesMap(final Configuration conf,
-684  AtomicLong uniqueHFilesArchiveSize, 
AtomicLong 

[04/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.

2017-03-21 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html 
b/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
index 96deb2d..83f9fed 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
@@ -42,152 +42,143 @@
 034@InterfaceAudience.Public
 035@InterfaceStability.Evolving
 036public class RSGroupInfo {
-037
-038  public static final String 
DEFAULT_GROUP = "default";
-039  public static final String 
NAMESPACEDESC_PROP_GROUP = "hbase.rsgroup.name";
-040
-041  private String name;
-042  // Keep servers in a sorted set so has 
an expected ordering when displayed.
-043  private SortedSet<Address> 
servers;
-044  // Keep tables sorted too.
-045  private SortedSet<TableName> 
tables;
-046
-047  public RSGroupInfo(String name) {
-048this(name, new 
TreeSet<Address>(), new TreeSet<TableName>());
-049  }
-050
-051  RSGroupInfo(String name, 
SortedSet<Address> servers, SortedSet<TableName> tables) {
-052this.name = name;
-053this.servers = servers == null? new 
TreeSet<Address>(): servers;
-054this.servers.addAll(servers);
-055this.tables = new 
TreeSet<>(tables);
-056  }
-057
-058  public RSGroupInfo(RSGroupInfo src) {
-059this(src.getName(), src.servers, 
src.tables);
-060  }
-061
-062  /**
-063   * Get group name.
-064   *
-065   * @return group name
-066   */
-067  public String getName() {
-068return name;
-069  }
-070
-071  /**
-072   * Adds the server to the group.
-073   *
-074   * @param hostPort the server
-075   */
-076  public void addServer(Address 
hostPort){
-077servers.add(hostPort);
-078  }
-079
-080  /**
-081   * Adds a group of servers.
-082   *
-083   * @param hostPort the servers
-084   */
-085  public void 
addAllServers(Collection<Address> hostPort){
-086servers.addAll(hostPort);
-087  }
-088
-089  /**
-090   * @param hostPort hostPort of the 
server
-091   * @return true, if a server with 
hostPort is found
+037  public static final String 
DEFAULT_GROUP = "default";
+038  public static final String 
NAMESPACE_DESC_PROP_GROUP = "hbase.rsgroup.name";
+039
+040  private final String name;
+041  // Keep servers in a sorted set so has 
an expected ordering when displayed.
+042  private final SortedSet<Address> 
servers;
+043  // Keep tables sorted too.
+044  private final 
SortedSet<TableName> tables;
+045
+046  public RSGroupInfo(String name) {
+047this(name, new 
TreeSet<Address>(), new TreeSet<TableName>());
+048  }
+049
+050  RSGroupInfo(String name, 
SortedSet<Address> servers, SortedSet<TableName> tables) {
+051this.name = name;
+052this.servers = servers == null? new 
TreeSet<>(): servers;
+053this.servers.addAll(servers);
+054this.tables = new 
TreeSet<>(tables);
+055  }
+056
+057  public RSGroupInfo(RSGroupInfo src) {
+058this(src.getName(), src.servers, 
src.tables);
+059  }
+060
+061  /**
+062   * Get group name.
+063   */
+064  public String getName() {
+065return name;
+066  }
+067
+068  /**
+069   * Adds the given server to the 
group.
+070   */
+071  public void addServer(Address 
hostPort){
+072servers.add(hostPort);
+073  }
+074
+075  /**
+076   * Adds the given servers to the 
group.
+077   */
+078  public void 
addAllServers(Collection<Address> hostPort){
+079servers.addAll(hostPort);
+080  }
+081
+082  /**
+083   * @param hostPort hostPort of the 
server
+084   * @return true, if a server with 
hostPort is found
+085   */
+086  public boolean containsServer(Address 
hostPort) {
+087return servers.contains(hostPort);
+088  }
+089
+090  /**
+091   * Get list of servers.
 092   */
-093  public boolean containsServer(Address 
hostPort) {
-094return servers.contains(hostPort);
+093  public Set<Address> getServers() 
{
+094return servers;
 095  }
 096
 097  /**
-098   * Get list of servers.
-099   *
-100   * @return set of servers
-101   */
-102  public Set<Address> getServers() 
{
-103return servers;
-104  }
-105
-106  /**
-107   * Remove a server from this group.
-108   *
-109   * @param hostPort HostPort of the 
server to remove
-110   */
-111  public boolean removeServer(Address 
hostPort) {
-112return servers.remove(hostPort);
+098   * Remove given server from the 
group.
+099   */
+100  public boolean removeServer(Address 
hostPort) {
+101return servers.remove(hostPort);
+102  }
+103
+104  /**
+105   * Get set of tables that are members 
of the group.
+106   */
+107  public SortedSet<TableName> 
getTables() {
+108return tables;
+109  }
+110
+111  public void addTable(TableName table) 
{
+112tables.add(table);
 113  }
 114
-115  /**
-116   * Set of tables that are members of 
this group
-117   * @return set of tables
-118   */
-119  public SortedSet<TableName> 
getTables() {
-120
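The RSGroupInfo diff above shows the group's public surface: sorted server and table sets with add/contains/remove operations. A hypothetical usage sketch, with `String` standing in for HBase's `Address` and `TableName` types so the example is self-contained:

```java
import java.util.SortedSet;
import java.util.TreeSet;

class Group {
    private final String name;
    // TreeSet keeps servers and tables in an expected ordering when displayed,
    // matching the comment in the diff above.
    private final SortedSet<String> servers = new TreeSet<>();
    private final SortedSet<String> tables = new TreeSet<>();

    Group(String name) { this.name = name; }

    String getName() { return name; }
    void addServer(String hostPort) { servers.add(hostPort); }
    boolean containsServer(String hostPort) { return servers.contains(hostPort); }
    boolean removeServer(String hostPort) { return servers.remove(hostPort); }
    void addTable(String table) { tables.add(table); }
    SortedSet<String> getServers() { return servers; }
    SortedSet<String> getTables() { return tables; }

    public static void main(String[] args) {
        Group g = new Group("default");          // hypothetical group name
        g.addServer("rs2.example.com:16020");    // placeholder host:port pairs
        g.addServer("rs1.example.com:16020");
        g.addTable("ns:events");                 // placeholder table name
        // Insertion order does not matter; the sorted set orders servers.
        System.out.println(g.getServers().first()
            + " " + g.containsServer("rs1.example.com:16020"));
    }
}
```

Running `main` prints `rs1.example.com:16020 true`, illustrating why the sorted sets give a stable display order regardless of when servers joined the group.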

[hbase] Git Push Summary

2016-09-27 Thread misty
Repository: hbase
Updated Tags:  refs/tags/hbase-1.1.7RC0 [deleted] 2a744083d


[hbase] Git Push Summary

2016-09-27 Thread misty
Repository: hbase
Updated Tags:  refs/tags/1.1.7RC0 [created] ea395bfdc


[hbase] Git Push Summary

2016-09-27 Thread misty
Repository: hbase
Updated Tags:  refs/tags/hbase-1.1.7RC0 [created] 2a744083d


svn commit: r15777 - /dev/hbase/hbase-1.1.7RC0/

2016-09-27 Thread misty
Author: misty
Date: Tue Sep 27 20:21:04 2016
New Revision: 15777

Log:
First RC for hbase-1.1.7

Added:
dev/hbase/hbase-1.1.7RC0/
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz   (with props)
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.asc
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.md5
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.mds
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.sha
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz   (with props)
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz.asc
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz.md5
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz.mds
dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz.sha

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.asc
==
--- dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.asc (added)
+++ dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.asc Tue Sep 27 20:21:04 2016
@@ -0,0 +1,17 @@
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v2
+
+iQIcBAABCAAGBQJX6tPIAAoJEAyQQR0hoV3FN3AP/Ar1Q+UoEQv2CUp85EDe0F6f
+C0ARHen5V4Q08Dg0AONzMiY7CzCQzo8GxKQMPGi1Etcs4BSveC9IwmVLCPH1ZHCC
+dEl+Z9F7pqUqisVa7oRc+W+G6fiew4vBNRlsJjkHVjm2i6IjCQpBTBJpFpY2Rmty
+7STpj6yR4mzbxohIWZ3iwGZrVMIvCgAvB6epdNR+4xGT8z68MQk9nJlkO3p4UXTp
+xQX3AY7oW/OWEve9ggqZjjbSqucnr/hfbJvR7uaSz8D7hqF/LCyRqOXhZZBI6/P6
+BrSsf1lDaa7gJutg9goDsVMiaPBhZ49rnnLOEjTxJi/4N2CF6HCxTNo+uwMAHzbG
+R1D5cq1lw9TlEEXjvMzhd6BjLfipP9qNfkMaUCycFDY0qfxqWpipa7kBAfUzKSFN
+1DEHkDaTCzJ4EnvjTWN3CWAsYQmMtB8QRVV3kOLBw3MN9o7LAEby3nh0vOCdj49H
+hMGJKsptyueR0c6r051a4BlbgEXESlUlmDMFbMsQTBd2jBhedcz+TDJoxui+TLuF
+pdcNJ02btrknzanL/j+4atTmESTGfC7If/9QqSH0KGr7k28PJXBPI4nLNpdx+jbT
+JREaOVV+WV5hpUHBG9HaeQBoMy+9v06T5v3zd/+EdZ87iGXk8oYmlDuOzTvT49Cp
+gWJjw1JbUtBBrcmZuNXU
+=CoVB
+-----END PGP SIGNATURE-----

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.md5
==
--- dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.md5 (added)
+++ dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.md5 Tue Sep 27 20:21:04 2016
@@ -0,0 +1 @@
+hbase-1.1.7-bin.tar.gz: C7 8E BB BF AB 48 91 56  FE FD 89 F2 19 36 4B 73

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.mds
==
--- dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.mds (added)
+++ dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.mds Tue Sep 27 20:21:04 2016
@@ -0,0 +1,17 @@
+hbase-1.1.7-bin.tar.gz:MD5 = C7 8E BB BF AB 48 91 56  FE FD 89 F2 19 36 4B
+ 73
+hbase-1.1.7-bin.tar.gz:   SHA1 = E4C1 672F E87D 32F3 8FAC  D006 00D1 00E0 AC6F
+ C3E2
+hbase-1.1.7-bin.tar.gz: RMD160 = CD35 2EDC 1F91 8871 1C36  FB0F E04C 0C87 A6F4
+ 2472
+hbase-1.1.7-bin.tar.gz: SHA224 = 16F67692 BFD4D495 B706B15F 608E9EA3 02EB6AFD
+ 94D16C26 639D15A8
+hbase-1.1.7-bin.tar.gz: SHA256 = 5B232E0B FF873FE3 2E56CECD FF8FE50F 10A75240
+ B7D6DD42 A34DDEE4 BEBCCEC2
+hbase-1.1.7-bin.tar.gz: SHA384 = 23AA12CF BCB8EB6B 97A03570 7AC67A75 D45C4FC2
+ 4A32DE64 F099F967 4819B041 587050EF 7FD78CE6
+ 9B74DF43 F2EF5F57
+hbase-1.1.7-bin.tar.gz: SHA512 = E6F0D62B C32D7414 37088951 D9A00280 931395FD
+ AE87BB5F 9568B880 FC513198 0BD42527 AFDA3327
+ 6C5A062D 254B80E2 A2E2124E 5C9ECF6C B5C20023
+ A9C1956B

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.sha
==
--- dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.sha (added)
+++ dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-bin.tar.gz.sha Tue Sep 27 20:21:04 2016
@@ -0,0 +1,3 @@
+hbase-1.1.7-bin.tar.gz: E6F0D62B C32D7414 37088951 D9A00280 931395FD AE87BB5F
+9568B880 FC513198 0BD42527 AFDA3327 6C5A062D 254B80E2
+A2E2124E 5C9ECF6C B5C20023 A9C1956B

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/hbase/hbase-1.1.7RC0/hbase-1.1.7-src.tar.gz.asc
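The `.md5`/`.mds`/`.sha` sidecar files above exist so downloaders can verify the release tarballs against the published digests. A minimal sketch of that check in plain Java (the file name and the in-memory demo data are placeholders; a real check would read the tarball and the digest from the `.sha` file):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class DigestCheck {
    // Hex-encode a digest in the uppercase style used by the .sha files.
    static String hex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    static String sha512(byte[] data) throws Exception {
        return hex(MessageDigest.getInstance("SHA-512").digest(data));
    }

    public static void main(String[] args) throws Exception {
        // Real use would read the release artifact, e.g.
        // byte[] data = java.nio.file.Files.readAllBytes(
        //     java.nio.file.Paths.get("hbase-1.1.7-bin.tar.gz"));
        byte[] data = "demo".getBytes(StandardCharsets.UTF_8);
        String expected = sha512(data);   // would come from the .sha sidecar
        System.out.println(expected.equals(sha512(data))
            ? "digest OK" : "digest MISMATCH");
    }
}
```

Running `main` prints `digest OK`. Verifying the `.asc` PGP signature additionally proves who produced the artifact, not just that it arrived intact.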

svn commit: r15764 - /release/hbase/KEYS

2016-09-27 Thread misty
Author: misty
Date: Tue Sep 27 16:00:51 2016
New Revision: 15764

Log:
Trying to fix the keys file

Modified:
release/hbase/KEYS

Modified: release/hbase/KEYS
==
--- release/hbase/KEYS (original)
+++ release/hbase/KEYS Tue Sep 27 16:00:51 2016
@@ -805,15 +805,13 @@ Lk/HI8x8RtdoATBkyN9ne6GGioL4nLx2kp8C4Rd3
 Oe/B8rbuD+mWrQ1HIoiO6/A4T91CZQ9ezstoABYCo0DZAA==
 =Vuvp
 -----END PGP PUBLIC KEY BLOCK-----
-
 pub   4096R/21A15DC5 2016-09-26
-  Key fingerprint = C45A 63D3 8427 A76D BF89  FE58 0C90 411D 21A1 5DC5
 uid   [ultimate] Misty Stanley-Jones (Key for signing releases) 
<mi...@apache.org>
sig 3        21A15DC5 2016-09-26  Misty Stanley-Jones (Key for signing 
releases) <mi...@apache.org>
 sub   4096R/98AE28C3 2016-09-26
 sig  21A15DC5 2016-09-26  Misty Stanley-Jones (Key for signing 
releases) <mi...@apache.org>
------BEGIN PGP PUBLIC KEY BLOCK-----
 
+-----BEGIN PGP PUBLIC KEY BLOCK-----
 Version: GnuPG v2
 
 mQINBFfpRwwBEADpbcrGuH+njNTFN2Ib8iuQCUNYtiRSCNZBovpKuCkVn/0Bn/79




[2/3] hbase git commit: updating docs from master

2016-09-26 Thread misty
updating docs from master


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/20a8d63c
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/20a8d63c
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/20a8d63c

Branch: refs/heads/branch-1.1
Commit: 20a8d63cf8d6b547b1050d3ed1ac01033942756f
Parents: ffb2fe7
Author: Misty Stanley-Jones <mi...@docker.com>
Authored: Sat Sep 24 11:34:54 2016 -0700
Committer: Misty Stanley-Jones <mi...@docker.com>
Committed: Sat Sep 24 11:34:54 2016 -0700

--
 src/main/asciidoc/_chapters/configuration.adoc | 41 +++---
 src/main/asciidoc/_chapters/developer.adoc | 60 +
 src/main/asciidoc/_chapters/ops_mgt.adoc   |  4 +-
 src/main/asciidoc/_chapters/zookeeper.adoc |  4 +-
 4 files changed, 48 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/20a8d63c/src/main/asciidoc/_chapters/configuration.adoc
--
diff --git a/src/main/asciidoc/_chapters/configuration.adoc 
b/src/main/asciidoc/_chapters/configuration.adoc
index 8dc3e8a..4804332 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -100,6 +100,11 @@ This section lists required services and some required 
system configuration.
 |JDK 7
 |JDK 8
 
+|2.0
+|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
+|link:http://search-hadoop.com/m/YGbbsPxZ723m3as[Not Supported]
+|yes
+
 |1.3
 |link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |yes
@@ -379,8 +384,9 @@ See also 
<<casestudies.max.transfer.threads,casestudies.max.transfer.threads>> a
 === ZooKeeper Requirements
 
 ZooKeeper 3.4.x is required as of HBase 1.0.0.
-HBase makes use of the `multi` functionality that is only available since 
3.4.0 (The `useMulti` configuration option defaults to `true` in HBase 1.0.0).
-See link:https://issues.apache.org/jira/browse/HBASE-12241[HBASE-12241 (The 
crash of regionServer when taking deadserver's replication queue breaks 
replication)] and 
link:https://issues.apache.org/jira/browse/HBASE-6775[HBASE-6775 (Use ZK.multi 
when available for HBASE-6710 0.92/0.94 compatibility fix)] for background.
+HBase makes use of the `multi` functionality that is only available since 
ZooKeeper 3.4.0. The `hbase.zookeeper.useMulti` configuration property defaults 
to `true` in HBase 1.0.0.
+Refer to link:https://issues.apache.org/jira/browse/HBASE-12241[HBASE-12241 
(The crash of regionServer when taking deadserver's replication queue breaks 
replication)] and 
link:https://issues.apache.org/jira/browse/HBASE-6775[HBASE-6775 (Use ZK.multi 
when available for HBASE-6710 0.92/0.94 compatibility fix)] for background.
+The property is deprecated and useMulti is always enabled in HBase 2.0.
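The paragraph above introduces the `hbase.zookeeper.useMulti` property. An illustrative hbase-site.xml fragment (only meaningful on HBase 1.x, where the property still exists; HBase 2.0 ignores it and always uses multi):

```xml
<!-- hbase-site.xml: explicit setting of the 1.x default described above -->
<property>
  <name>hbase.zookeeper.useMulti</name>
  <value>true</value>
</property>
```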
 
 [[standalone_dist]]
 == HBase run modes: Standalone and Distributed
@@ -1090,37 +1096,6 @@ Only a subset of all configurations can currently be 
changed in the running serv
 Here is an incomplete list: `hbase.regionserver.thread.compaction.large`, 
`hbase.regionserver.thread.compaction.small`, 
`hbase.regionserver.thread.split`, `hbase.regionserver.thread.merge`, as well 
as compaction policy and configurations and adjustment to offpeak hours.
 For the full list consult the patch attached to  
link:https://issues.apache.org/jira/browse/HBASE-12147[HBASE-12147 Porting 
Online Config Change from 89-fb].
 
-[[amazon_s3_configuration]]
-== Using Amazon S3 Storage
-
-HBase is designed to be tightly coupled with HDFS, and testing of other 
filesystems
-has not been thorough.
-
-The following limitations have been reported:
-
-- RegionServers should be deployed in Amazon EC2 to mitigate latency and 
bandwidth
-limitations when accessing the filesystem, and RegionServers must remain 
available
-to preserve data locality.
-- S3 writes each inbound and outbound file to disk, which adds overhead to 
each operation.
-- The best performance is achieved when all clients and servers are in the 
Amazon
cloud, rather than a heterogeneous architecture.
-- You must be aware of the location of `hadoop.tmp.dir` so that the local 
`/tmp/`
-directory is not filled to capacity.
-- HBase has a different file usage pattern than MapReduce jobs and has been 
optimized for
-HDFS, rather than distant networked storage.
-- The `s3a://` protocol is strongly recommended. The `s3n://` and `s3://` 
protocols have serious
-limitations and do not use the Amazon AWS SDK. The `s3a://` protocol is 
supported
-for use with HBase if you use Hadoop 2.6.1 or higher with HBase 1.2 or higher. 
Hadoop
-2.6.0 is not supported with HBase at all.
-
-Configuration details for Amazon S3 and associated Amazon services such as EMR 
are
-out of the scope of the 

[3/3] hbase git commit: Update CHANGELOG for 1.1.7

2016-09-26 Thread misty
Update CHANGELOG for 1.1.7


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/cdc799e9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/cdc799e9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/cdc799e9

Branch: refs/heads/branch-1.1
Commit: cdc799e9b7bedb6b2b2a9a3123321e010c7a6eab
Parents: 20a8d63
Author: Misty Stanley-Jones <mi...@docker.com>
Authored: Sat Sep 24 13:20:07 2016 -0700
Committer: Misty Stanley-Jones <mi...@docker.com>
Committed: Sat Sep 24 13:20:07 2016 -0700

--
 CHANGES.txt | 245 +--
 1 file changed, 95 insertions(+), 150 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/cdc799e9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1476991..a8ccd25 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,91 +1,36 @@
 HBase Change Log
 
 
-Release Notes - HBase - Version 1.1.6 08/28/2016
-
+Release Notes - HBase - Version 1.1.7 09/26/2016
 ** Sub-task
-* [HBASE-15878] - Deprecate doBulkLoad(Path hfofDir, final HTable table)  
in branch-1 (even though its 'late')
-* [HBASE-16056] - Procedure v2 - fix master crash for FileNotFound
-* [HBASE-16189] - [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x 
servers
-* [HBASE-16194] - Should count in MSLAB chunk allocation into heap size 
change when adding duplicate cells
-* [HBASE-16195] - Should not add chunk into chunkQueue if not using chunk 
pool in HeapMemStoreLAB
-* [HBASE-16317] - revert all ESAPI changes
-* [HBASE-16318] - fail build if license isn't in whitelist
-* [HBASE-16321] - Ensure findbugs jsr305 jar isn't present
-* [HBASE-16452] - Procedure v2 - Make ProcedureWALPrettyPrinter extend Tool
-
-
-
-
-
-
+* [HBASE-16522] - Procedure v2 - Cache system user and avoid IOException
+* [HBASE-16518] - Remove old .arcconfig file
+* [HBASE-16340] - ensure no Xerces jars included
+* [HBASE-16260] - Audit dependencies for Category-X
 
 ** Bug
-* [HBASE-11625] - Reading datablock throws "Invalid HFile block magic" and 
can not switch to hdfs checksum 
-* [HBASE-15615] - Wrong sleep time when RegionServerCallable need retry
-* [HBASE-15635] - Mean age of Blocks in cache (seconds) on webUI should be 
greater than zero
-* [HBASE-15698] - Increment TimeRange not serialized to server
-* [HBASE-15801] - Upgrade checkstyle for all branches
-* [HBASE-15811] - Batch Get after batch Put does not fetch all Cells
-* [HBASE-15824] - LocalHBaseCluster gets bind exception in master info port
-* [HBASE-15850] - Localize the configuration change in testCheckTableLocks 
to reduce flakiness of TestHBaseFsck test suite 
-* [HBASE-15852] - Backport HBASE-15125 'HBaseFsck's adoptHdfsOrphan 
function creates region with wrong end key boundary' to Apache HBase 1.1
-* [HBASE-15856] - Cached Connection instances can wind up with addresses 
never resolved
-* [HBASE-15873] - ACL for snapshot restore / clone is not enforced
-* [HBASE-15880] - RpcClientImpl#tracedWriteRequest incorrectly closes 
HTrace span
-* [HBASE-15925] - compat-module maven variable not evaluated
-* [HBASE-15954] - REST server should log requests with TRACE instead of 
DEBUG
-* [HBASE-15955] - Disable action in CatalogJanitor#setEnabled should wait 
for active cleanup scan to finish
-* [HBASE-15957] - RpcClientImpl.close never ends in some circumstances
-* [HBASE-15975] - logic in 
TestHTableDescriptor#testAddCoprocessorWithSpecStr is wrong
-* [HBASE-15976] - RegionServerMetricsWrapperRunnable will be failure  when 
disable blockcache.
-* [HBASE-16012] - Major compaction can't work due to obsolete scanner read 
point in RegionServer
-* [HBASE-16016] - AssignmentManager#waitForAssignment could have 
unexpected negative deadline
-* [HBASE-16032] - Possible memory leak in StoreScanner
-* [HBASE-16093] - Splits failed before creating daughter regions leave 
meta inconsistent
-* [HBASE-16129] - check_compatibility.sh is broken when using Java API 
Compliance Checker v1.7
-* [HBASE-16132] - Scan does not return all the result when regionserver is 
busy
-* [HBASE-16135] - PeerClusterZnode under rs of removed peer may never be 
deleted
-* [HBASE-16144] - Replication queue's lock will live forever if RS 
acquiring the lock has died prematurely
-* [HBASE-16190] - IntegrationTestDDLMasterFailover failed with 
IllegalArgumentException: n must be positive
-* [HBASE-16201] - NPE in RpcServer causing intermittent UT failure of 
TestMasterReplication#testHFileCyclicReplication
-* [HBASE-16207] - can't restore snapshot without "Admin" permission
-* [HBASE-1623

[1/3] hbase git commit: bump version to 1.1.7

2016-09-26 Thread misty
Repository: hbase
Updated Branches:
  refs/heads/branch-1.1 9d424c20c -> cdc799e9b


bump version to 1.1.7


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ffb2fe76
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ffb2fe76
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ffb2fe76

Branch: refs/heads/branch-1.1
Commit: ffb2fe76ba4aa5f0402e041b18e9cfd1fa988444
Parents: 9d424c2
Author: Misty Stanley-Jones <mi...@docker.com>
Authored: Sat Sep 24 10:44:54 2016 -0700
Committer: Misty Stanley-Jones <mi...@docker.com>
Committed: Sat Sep 24 10:44:54 2016 -0700

--
 hbase-annotations/pom.xml| 2 +-
 hbase-assembly/pom.xml   | 2 +-
 hbase-checkstyle/pom.xml | 4 ++--
 hbase-client/pom.xml | 2 +-
 hbase-common/pom.xml | 2 +-
 hbase-examples/pom.xml   | 2 +-
 hbase-hadoop-compat/pom.xml  | 2 +-
 hbase-hadoop2-compat/pom.xml | 2 +-
 hbase-it/pom.xml | 2 +-
 hbase-prefix-tree/pom.xml| 2 +-
 hbase-procedure/pom.xml  | 2 +-
 hbase-protocol/pom.xml   | 2 +-
 hbase-resource-bundle/pom.xml| 2 +-
 hbase-rest/pom.xml   | 2 +-
 hbase-server/pom.xml | 2 +-
 hbase-shaded/hbase-shaded-client/pom.xml | 2 +-
 hbase-shaded/hbase-shaded-server/pom.xml | 2 +-
 hbase-shaded/pom.xml | 2 +-
 hbase-shell/pom.xml  | 2 +-
 hbase-testing-util/pom.xml   | 2 +-
 hbase-thrift/pom.xml | 2 +-
 pom.xml  | 2 +-
 22 files changed, 23 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-annotations/pom.xml
--
diff --git a/hbase-annotations/pom.xml b/hbase-annotations/pom.xml
index 4188e86..d703299 100644
--- a/hbase-annotations/pom.xml
+++ b/hbase-annotations/pom.xml
@@ -23,7 +23,7 @@
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-assembly/pom.xml
--
diff --git a/hbase-assembly/pom.xml b/hbase-assembly/pom.xml
index 9bd513e..b9fe23c 100644
--- a/hbase-assembly/pom.xml
+++ b/hbase-assembly/pom.xml
@@ -23,7 +23,7 @@
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
   
  <artifactId>hbase-assembly</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-checkstyle/pom.xml
--
diff --git a/hbase-checkstyle/pom.xml b/hbase-checkstyle/pom.xml
index 7cb107d..3261ba2 100644
--- a/hbase-checkstyle/pom.xml
+++ b/hbase-checkstyle/pom.xml
@@ -24,14 +24,14 @@
 <modelVersion>4.0.0</modelVersion>
 <groupId>org.apache.hbase</groupId>
 <artifactId>hbase-checkstyle</artifactId>
-<version>1.1.7-SNAPSHOT</version>
+<version>1.1.7</version>
 <name>Apache HBase - Checkstyle</name>
 <description>Module to hold Checkstyle properties for HBase.</description>
 
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-client/pom.xml
--
diff --git a/hbase-client/pom.xml b/hbase-client/pom.xml
index 0f48601..79bfec5 100644
--- a/hbase-client/pom.xml
+++ b/hbase-client/pom.xml
@@ -24,7 +24,7 @@
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-common/pom.xml
--
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index 2151eda..0998c5e 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -23,7 +23,7 @@
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-examples/pom.xml
--
diff --git a/hbase-examples/pom.xml b/hbase-examples/pom.xml
index 84616e6..bdf4075 100644
--- a/hbase-examples/pom.xml
+++ b/hbase-examples/pom.xml
@@ -23,7 +23,7 @@
   
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
   
  <artifactId>hbase-examples</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ffb2fe76/hbase-hadoop-compat/pom.xml
--
diff --git a/hbase-hadoop-compat/pom.xml b/hbase-hadoop-compat/pom.xml
index 2ce15a7..1e5ee11 100644
--- a/hbase-hadoop-compat/pom.xml
+++ b/hbase-hadoop-compat/pom.xml
@@ -23,7 +23,7 @@
 
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.1.7-SNAPSHOT</version>
+    <version>1.1.7</version>
     <relativePath>..</relativePath>
 
 

svn commit: r15675 - /release/hbase/KEYS

2016-09-26 Thread misty
Author: misty
Date: Mon Sep 26 16:20:47 2016
New Revision: 15675

Log:
Added signing key for mi...@apache.org

Modified:
release/hbase/KEYS

Modified: release/hbase/KEYS
==
--- release/hbase/KEYS (original)
+++ release/hbase/KEYS Mon Sep 26 16:20:47 2016
@@ -805,3 +805,63 @@ Lk/HI8x8RtdoATBkyN9ne6GGioL4nLx2kp8C4Rd3
 Oe/B8rbuD+mWrQ1HIoiO6/A4T91CZQ9ezstoABYCo0DZAA==
 =Vuvp
 -----END PGP PUBLIC KEY BLOCK-----
+
+pub   4096R/21A15DC5 2016-09-26
+  Key fingerprint = C45A 63D3 8427 A76D BF89  FE58 0C90 411D 21A1 5DC5
+uid   [ultimate] Misty Stanley-Jones (Key for signing releases) 
<mi...@apache.org>
+sig 3        21A15DC5 2016-09-26  Misty Stanley-Jones (Key for signing 
releases) <mi...@apache.org>
+sub   4096R/98AE28C3 2016-09-26
+sig  21A15DC5 2016-09-26  Misty Stanley-Jones (Key for signing 
releases) <mi...@apache.org>
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+Version: GnuPG v2
+
+mQINBFfpRwwBEADpbcrGuH+njNTFN2Ib8iuQCUNYtiRSCNZBovpKuCkVn/0Bn/79
+XOrW8NxWBtd8bNbTopJfTXn+BI5COkx5FK3rIOup+5Mfoa8+uBZgv2pearMjn1Sy
+S/hpBVkFF8ojbROpZCW5F82zEIuVJlcNEAqsFXKlxa4NmgAdCospJBVM6J7ZFmqw
+5xEwz0cvuQmTDv0ly5cPsoNJNZmewt+qZ/NcB0YvTfc/03EH6GVj9ph8l8pANzDy
+QK4PXDGji4e9VNDEY/VAV3LJ8vvSGHTQVXZNPUNDm4jsUv2bpfvPmYNM9IaUKbeC
+WUCFQI670KKfALUryiE99OXbXMdDHhxh+G4e1XXvf1E9cQ6DHY08iA2yKhfs5Opi
+wLckLH5ODti0xjKz71biX8JxznO1glkkACcUPYo7bzbtwROEukBmOEb0cXKRSJt9
+kzfLgJ3L5m45geQq8pMCTIrGMFUUUeRW4US/49wWmFuH8X6DSYxQ4hu793pvJsq7
+O6Fyc4+TQYDbbINwAw625Siv1WSJcNyySlUjrUD9Ssoo4XJ5nJIFq/qmQzIQoLWi
+d0nMFwMJ5xrLrFVJgdeLfAke3QClYGUIDqH4iAPyvejzeUvrKCAqIS51v6MbMBto
+hwYnyiHwtyLlMEp5xW5nSpsXZEspdAaOHbE03+t3d3ZFhoPLC6feibeIbQARAQAB
+tEFNaXN0eSBTdGFubGV5LUpvbmVzIChLZXkgZm9yIHNpZ25pbmcgcmVsZWFzZXMp
+IDxtaXN0eUBhcGFjaGUub3JnPokCOQQTAQgAIwUCV+lHDAIbAwcLCQgHAwIBBhUI
+AgkKCwQWAgMBAh4BAheAAAoJEAyQQR0hoV3FgPkP/AmqfIGEUpOfYSUeZ+gA+rFk
+UTQTFAhBbYM3w97zoCxJS0GxlJS7Rprb+nEBI3pZJmoPnFHhRc0ztNN+YFMHZLNN
+JaQteNp4scmn2gxPpXTbr023lVY9tmne2Ei+UWK6+d9WwtPQUxexrY8soS5S9Rrf
+yoXpV7l/lAuzwEDzD4uMQ5cDufO854JjNnohdgrYT0paCPLKEAVY221Inh7a7SDX
+KrhpOEqPb9ng0l5Cc6wMi9UT/yY8Su/yTClSNGRxGyEOp+z03zYblNR+miIxWP/a
+l0Y5NK0vnizANM5j+B+gk5e5z8E9c+KCW+frikFaV73yIvxeK01frx4dZ/67bASJ
+QQY/o6J6NX7ow5+/nOegph5qK4pl9snyQCDxl3XcbAPfCPzwo6yz0ObQf7ojAftX
+dbam4y2FkG7AfOm6JdGyVRxn9Ek/oTUtw4XlYvBflf911nTfqgZw2jFykScTT1LZ
+ijG+qfeC70Dr2qMdUMVJ5V3RX8ALqLflUXJKJfuWMZaYn0sxAwfRr5qRZB8XOzND
+ubpYvP13CNgac5bpwo7+bzqq1olZl2HsZk8wAFBPUAAFyhevG4ajuEbc01fVkVgg
+PHVd0yBCdJ1bFrwLTAQcjYKEeqiH6Ges/loCwbFgj/cQLHkOKc8hxMpRu/Bndn8K
+xxN5iSEYgcPFAKJUa4GduQINBFfpRwwBEACm8v78zkVgiCU6CfPalIDhZ+TmNJvY
+8Vdp8X+gBd6P3SHVfGcdEmRD34mV87uarNyUcaL3Er/jySCqNi9qyA4v0PSd/z/I
+ZSBms09rzxJpM3tCV/2/8vPOW/bHrjkyRgMY2a3ekRghr5eM9skd3DnWoiQ/sRXY
+9B5623VsP6Ev+oaR0AR/VGfNHFl85WUpf2Ce17GWA2URSGYAw54hVGSxhvlE+YSt
+faTfvRvYSUoAeq7VtzFx4g/GZUHSbtm4ufDdCB5MhXZdmQ5phjZaRbFfqrKFv2Nb
+DChoOW70vk6N95PbMjI8JjAaJHH/M/i/Wj335/6Nk/vjUHxNuJP7iha5q2GIRw6R
+ZHFTaX+Rs6jVP4eLUPPVDvzAY8HlYZ6kgj6T0r1O5NfEuBVDFrvRLAPcoFMgmOAQ
++kkpAmtWQvjSONTgPRcCh4UXB4gXt7H/c1E6D6axf+/gJfGuFaHqblS/hbfS7x5A
+Ejn+9aqrniDf1GSM+jmQmBQJunSjQR4ZbcumkbpRuviG8WB/V4OxrlyQQZgnz8Ne
+ZLJKJ9qhJGthPV0ujVmtyOtB6D/b5f5Hm6JyUWqwKhWa3xitUk0u5GcRoO0jOO1R
+WVeYB6L35kT02C4vAYE67UyUX0OBBbhdVBoXejsAzmSTfEN8ERXcpd9eBldovc9E
+4crNo4x7i3jqfQARAQABiQIfBBgBCAAJBQJX6UcMAhsMAAoJEAyQQR0hoV3FETwP
++gNCTygfSpQAL3oh7r5MaSRDk++K6ffllnOEXE02u9rej3LHsw1dGWx3VjgJV3Gt
+JaVcmyevrXjUxEoXcthTvpV8oAqpMcqvcSJOWmLN55UNTlm6XII8N4+TILLDHMCD
+oEvRlYJ/UWGqE0icqmrapyQ7FdBcuor4wquXvpM/bMDBbhl7MM6CpINO/426R9mr
+1k/JXTNbE84N9T2h/ERAW/LLIqVOzIdEveNHh9RebVdOQIXBewLJ9/SxsvFL23I6
+XhOln5RJECachzTmAIvBUJjlLaylbyBHJZvsfmiEDOxeajxBRagED8K2L7WcXGbG
+5JseOgovGUrwonT4wRCriEoZQxRrkJhh+k88yRXNa4sbxbLEbnXAoLIAhZRgRXBQ
+5cY2Lyxgy7BO2gH7SCOI89VZ+gYuB2c46Imx/BbtUG2F2WUuHEj46w2AtiH8fr1j
+Ow2NCnnFduV9eptWof4mhM+zGMelUHWYl9bpr3poe/ijc6j2h00MzzY3j6BgDirY
+6pUMsegPorLU5UNqf5uC3m0UYMpiBZP9oZXEjlHyTVY1sWL/Z5H/8aLCEboBW61t
+4lzbve5CEOTavrYxNU1jve9wK8iZ9jh1c1chqXIHwvlyfNZ+YFlXClEA4Ddm8EE7
+zsw2TSlqnlvmH87wmIyZKQtrDGSJp8qqLx+cfFL3wp7I
+=3+z7
+-----END PGP PUBLIC KEY BLOCK-----




[04/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RSRpcServices.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RSRpcServices.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RSRpcServices.html
index 7e7c584..70f074e 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RSRpcServices.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RSRpcServices.html
@@ -167,13 +167,13 @@
 
 
 
-protected RSRpcServices
-HRegionServer.rpcServices
-
-
 private RSRpcServices
 AnnotationReadingPriorityFunction.rpcServices
 
+
+protected RSRpcServices
+HRegionServer.rpcServices
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Region.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Region.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Region.html
index d0846c3..75d6896 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Region.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/Region.html
@@ -412,30 +412,30 @@
 
 
 private Region
-RSRpcServices.RegionScannerHolder.r
+SplitTransactionImpl.DaughterOpener.r
 
 
 private Region
-SplitTransactionImpl.DaughterOpener.r
+RSRpcServices.RegionScannerHolder.r
 
 
 private Region
-MemStoreFlusher.FlushRegionEntry.region
-
-
-private Region
 RegionServerServices.PostOpenDeployContext.region
 
-
+
 (package private) Region
 RegionCoprocessorHost.region
 The region
 
 
-
+
 private Region
 RegionCoprocessorHost.RegionEnvironment.region
 
+
+private Region
+MemStoreFlusher.FlushRegionEntry.region
+
 
 
 
@@ -473,6 +473,11 @@
 
 
 Region
+RegionMergeTransactionImpl.execute(Serverserver,
+   RegionServerServicesservices)
+
+
+Region
 RegionMergeTransaction.execute(Serverserver,
RegionServerServicesservices)
 Deprecated.
@@ -480,12 +485,13 @@
 
 
 
-
+
 Region
-RegionMergeTransactionImpl.execute(Serverserver,
-   RegionServerServicesservices)
+RegionMergeTransactionImpl.execute(Serverserver,
+   RegionServerServicesservices,
+   Useruser)
 
-
+
 Region
 RegionMergeTransaction.execute(Serverserver,
RegionServerServicesservices,
@@ -493,12 +499,6 @@
 Run the transaction.
 
 
-
-Region
-RegionMergeTransactionImpl.execute(Serverserver,
-   RegionServerServicesservices,
-   Useruser)
-
 
 private Region
 MemStoreFlusher.getBiggestMemstoreOfRegionReplica(http://docs.oracle.com/javase/8/docs/api/java/util/SortedMap.html?is-external=true;
 title="class or interface in java.util">SortedMaphttp://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long,RegionregionsBySize,
@@ -512,14 +512,14 @@
 
 
 Region
-HRegionServer.getFromOnlineRegions(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in 
java.lang">StringencodedRegionName)
-
-
-Region
 OnlineRegions.getFromOnlineRegions(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in 
java.lang">StringencodedRegionName)
 Return Region instance.
 
 
+
+Region
+HRegionServer.getFromOnlineRegions(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in 
java.lang">StringencodedRegionName)
+
 
 Region
 HRegionServer.getOnlineRegion(byte[]regionName)
@@ -609,24 +609,24 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegion
-HRegionServer.getOnlineRegions()
-
-
-http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegion
 OnlineRegions.getOnlineRegions()
 Get all online regions in this RS.
 
 
+
+http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegion
+HRegionServer.getOnlineRegions()
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegion
-HRegionServer.getOnlineRegions(TableNametableName)
-Gets the online regions of the specified table.
+OnlineRegions.getOnlineRegions(TableNametableName)
+Get all online regions of a table in this RS.
 
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListRegion
-OnlineRegions.getOnlineRegions(TableNametableName)
-Get all online regions of a table in this RS.
+HRegionServer.getOnlineRegions(TableNametableName)
+Gets the online regions of the specified table.
 
 
 
@@ -666,14 +666,14 @@
 
 
 void

[41/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/index-all.html
--
diff --git a/devapidocs/index-all.html b/devapidocs/index-all.html
index 7524c9b..a067f67 100644
--- a/devapidocs/index-all.html
+++ b/devapidocs/index-all.html
@@ -51135,6 +51135,8 @@
 Indicates to the client whether this task is monitoring a 
currently active 
  RPC call.
 
+isRunnable()
 - Method in class org.apache.hadoop.hbase.procedure2.Procedure
+
 isRunning()
 - Method in class org.apache.hadoop.hbase.master.ClusterSchemaServiceImpl
 
 isRunning()
 - Method in class org.apache.hadoop.hbase.master.procedure.MasterProcedureEnv

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/KeyValue.Type.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/KeyValue.Type.html 
b/devapidocs/org/apache/hadoop/hbase/KeyValue.Type.html
index 0109fa2..e208b7f 100644
--- a/devapidocs/org/apache/hadoop/hbase/KeyValue.Type.html
+++ b/devapidocs/org/apache/hadoop/hbase/KeyValue.Type.html
@@ -358,7 +358,7 @@ the order they are declared.
 
 
 values
-public static KeyValue.Type[] values()
+public static KeyValue.Type[] values()
 Returns an array containing the constants of this enum 
type, in
 the order they are declared.  This method may be used to iterate
 over the constants as follows:
@@ -378,7 +378,7 @@ for (KeyValue.Type c : KeyValue.Type.values())
 
 
 valueOf
-public staticKeyValue.TypevalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
+public staticKeyValue.TypevalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Returns the enum constant of this type with the specified 
name.
 The string must match exactly an identifier used to declare an
 enum constant in this type.  (Extraneous whitespace characters are 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/ProcedureState.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/ProcedureState.html 
b/devapidocs/org/apache/hadoop/hbase/ProcedureState.html
index cebeea3..f5718d6 100644
--- a/devapidocs/org/apache/hadoop/hbase/ProcedureState.html
+++ b/devapidocs/org/apache/hadoop/hbase/ProcedureState.html
@@ -283,7 +283,7 @@ the order they are declared.
 
 
 values
-public static ProcedureState[] values()
+public static ProcedureState[] values()
 Returns an array containing the constants of this enum 
type, in
 the order they are declared.  This method may be used to iterate
 over the constants as follows:
@@ -303,7 +303,7 @@ for (ProcedureState c : ProcedureState.values())
 
 
 valueOf
-public staticProcedureStatevalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
+public staticProcedureStatevalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Returns the enum constant of this type with the specified 
name.
 The string must match exactly an identifier used to declare an
 enum constant in this type.  (Extraneous whitespace characters are

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/Abortable.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/Abortable.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/Abortable.html
index 3e5db12..39f3592 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/Abortable.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/Abortable.html
@@ -298,11 +298,11 @@
 
 
 private Abortable
-SimpleRpcScheduler.abortable
+RpcExecutor.abortable
 
 
 private Abortable
-RpcExecutor.abortable
+SimpleRpcScheduler.abortable
 
 
 
@@ -563,24 +563,24 @@
 
 
 RpcScheduler
-FifoRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
-  PriorityFunctionpriority,
-  Abortableserver)
-
-
-RpcScheduler
 RpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
   PriorityFunctionpriority,
   Abortableserver)
 Constructs a RpcScheduler.
 
 
-
+
 RpcScheduler
 SimpleRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
   PriorityFunctionpriority,
   Abortableserver)
 
+
+RpcScheduler
+FifoRpcSchedulerFactory.create(org.apache.hadoop.conf.Configurationconf,
+  PriorityFunctionpriority,
+  Abortableserver)
+
 
 
 
@@ -651,16 +651,16 @@
 ReplicationQueuesArguments.abort
 
 
-private Abortable
-ReplicationPeersZKImpl.abortable
+protected Abortable
+ReplicationStateZKBase.abortable
 
 
 protected Abortable
 

[49/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/client/Consistency.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Consistency.html 
b/apidocs/org/apache/hadoop/hbase/client/Consistency.html
index 957225e..022e423 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Consistency.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Consistency.html
@@ -254,7 +254,7 @@ the order they are declared.
 
 
 values
-public static Consistency[] values()
+public static Consistency[] values()
 Returns an array containing the constants of this enum 
type, in
 the order they are declared.  This method may be used to iterate
 over the constants as follows:
@@ -274,7 +274,7 @@ for (Consistency c : Consistency.values())
 
 
 valueOf
-public staticConsistencyvalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
+public staticConsistencyvalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 Returns the enum constant of this type with the specified 
name.
 The string must match exactly an identifier used to declare an
 enum constant in this type.  (Extraneous whitespace characters are 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
index 17fcf5e..06be9bb 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
@@ -146,19 +146,19 @@ the order they are declared.
 
 
 
+Get
+Get.setConsistency(Consistencyconsistency)
+
+
 Scan
 Scan.setConsistency(Consistencyconsistency)
 
-
+
 Query
 Query.setConsistency(Consistencyconsistency)
 Sets the consistency level for this operation
 
 
-
-Get
-Get.setConsistency(Consistencyconsistency)
-
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
index 88d6806..52f0603 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
@@ -207,15 +207,15 @@ the order they are declared.
 Append.setDurability(Durabilityd)
 
 
-Increment
-Increment.setDurability(Durabilityd)
-
-
 Mutation
 Mutation.setDurability(Durabilityd)
 Set the durability for this mutation
 
 
+
+Increment
+Increment.setDurability(Durabilityd)
+
 
 Delete
 Delete.setDurability(Durabilityd)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
index 7ca47ff..ce91f66 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
@@ -139,19 +139,19 @@ the order they are declared.
 
 
 
+Get
+Get.setIsolationLevel(IsolationLevellevel)
+
+
 Scan
 Scan.setIsolationLevel(IsolationLevellevel)
 
-
+
 Query
 Query.setIsolationLevel(IsolationLevellevel)
 Set the isolation level for this query.
 
 
-
-Get
-Get.setIsolationLevel(IsolationLevellevel)
-
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/client/class-use/Result.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Result.html 
b/apidocs/org/apache/hadoop/hbase/client/class-use/Result.html
index 29ae4f1..a243e93 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Result.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Result.html
@@ -298,17 +298,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
-MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,
-   org.apache.hadoop.mapred.JobConfjob,
-   
org.apache.hadoop.mapred.Reporterreporter)
-
-
-org.apache.hadoop.mapred.RecordReaderImmutableBytesWritable,Result
 TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplitsplit,

[43/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/coc.html
--
diff --git a/coc.html b/coc.html
index 6e3d020..a7c00a6 100644
--- a/coc.html
+++ b/coc.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  
   Code of Conduct Policy
@@ -331,7 +331,7 @@ For flagrant violations requiring a firm response the PMC 
may opt to skip early
 http://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2016-09-15
+  Last Published: 
2016-09-16
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/cygwin.html
--
diff --git a/cygwin.html b/cygwin.html
index d041d6f..ba2c57d 100644
--- a/cygwin.html
+++ b/cygwin.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Installing Apache HBase (TM) on Windows using 
Cygwin
 
@@ -673,7 +673,7 @@ Now your HBase server is running, start 
coding and build that next
 http://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2016-09-15
+  Last Published: 
2016-09-16
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/dependencies.html
--
diff --git a/dependencies.html b/dependencies.html
index 231c16f..4bf08eb 100644
--- a/dependencies.html
+++ b/dependencies.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Project Dependencies
 
@@ -518,7 +518,7 @@
 http://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2016-09-15
+  Last Published: 
2016-09-16
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/dependency-convergence.html
--
diff --git a/dependency-convergence.html b/dependency-convergence.html
index 0d0dff1..1387601 100644
--- a/dependency-convergence.html
+++ b/dependency-convergence.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Reactor Dependency Convergence
 
@@ -1729,7 +1729,7 @@
 http://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2016-09-15
+  Last Published: 
2016-09-16
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/dependency-info.html
--
diff --git a/dependency-info.html b/dependency-info.html
index 976edcb..55992f1 100644
--- a/dependency-info.html
+++ b/dependency-info.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Dependency Information
 
@@ -312,7 +312,7 @@
 http://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2016-09-15
+  Last Published: 
2016-09-16
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/dependency-management.html
--
diff --git a/dependency-management.html b/dependency-management.html
index aec93c5..8e5cf6e 100644
--- a/dependency-management.html
+++ b/dependency-management.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase  Project Dependency Management
 
@@ -816,7 +816,7 @@
 http://www.apache.org/;>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2016-09-15
+  Last Published: 
2016-09-16
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/constant-values.html
--
diff --git a/devapidocs/constant-values.html b/devapidocs/constant-values.html
index 69fa2a7..fe0c8a0 100644
--- a/devapidocs/constant-values.html
+++ b/devapidocs/constant-values.html
@@ -3662,28 +3662,28 @@
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
 date
-"Thu Sep 15 14:33:49 UTC 2016"
+"Fri Sep 16 14:35:56 UTC 2016"
 
 
 
 
 publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface 

[32/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/Server.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/Server.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/Server.html
index 6f82fe2..4b95eae 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/Server.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/Server.html
@@ -332,17 +332,17 @@
 ActiveMasterManager.master
 
 
-protected Server
-BulkAssigner.server
-
-
 private Server
 CatalogJanitor.server
 
-
+
 private Server
 SplitLogManager.server
 
+
+protected Server
+BulkAssigner.server
+
 
 
 
@@ -439,19 +439,19 @@
 
 
 private Server
-SplitTransactionImpl.server
+HeapMemoryManager.server
 
 
 private Server
-SplitTransactionImpl.DaughterOpener.server
+SplitTransactionImpl.server
 
 
 private Server
-LogRoller.server
+SplitTransactionImpl.DaughterOpener.server
 
 
 private Server
-HeapMemoryManager.server
+LogRoller.server
 
 
 
@@ -464,21 +464,21 @@
 
 
 Server
-RegionMergeTransaction.getServer()
-Get the Server running the transaction or rollback
-
+RegionMergeTransactionImpl.getServer()
 
 
 Server
-RegionMergeTransactionImpl.getServer()
+SplitTransactionImpl.getServer()
 
 
 Server
-SplitTransactionImpl.getServer()
+SplitTransaction.getServer()
+Get the Server running the transaction or rollback
+
 
 
 Server
-SplitTransaction.getServer()
+RegionMergeTransaction.getServer()
 Get the Server running the transaction or rollback
 
 
@@ -516,24 +516,15 @@
 
 
 Region
-RegionMergeTransaction.execute(Serverserver,
-   RegionServerServicesservices)
-Deprecated.
-use #execute(Server, 
RegionServerServices, User)
-
-
-
-
-Region
 RegionMergeTransactionImpl.execute(Serverserver,
RegionServerServicesservices)
 
-
+
 PairOfSameTypeRegion
 SplitTransactionImpl.execute(Serverserver,
RegionServerServicesservices)
 
-
+
 PairOfSameTypeRegion
 SplitTransaction.execute(Serverserver,
RegionServerServicesservices)
@@ -542,27 +533,28 @@
 
 
 
-
+
 Region
-RegionMergeTransaction.execute(Serverserver,
-   RegionServerServicesservices,
-   Useruser)
-Run the transaction.
+RegionMergeTransaction.execute(Serverserver,
+   RegionServerServicesservices)
+Deprecated.
+use #execute(Server, 
RegionServerServices, User)
+
 
 
-
+
 Region
 RegionMergeTransactionImpl.execute(Serverserver,
RegionServerServicesservices,
Useruser)
 
-
+
 PairOfSameTypeRegion
 SplitTransactionImpl.execute(Serverserver,
RegionServerServicesservices,
Useruser)
 
-
+
 PairOfSameTypeRegion
 SplitTransaction.execute(Serverserver,
RegionServerServicesservices,
@@ -570,6 +562,14 @@
 Run the transaction.
 
 
+
+Region
+RegionMergeTransaction.execute(Serverserver,
+   RegionServerServicesservices,
+   Useruser)
+Run the transaction.
+
+
 
 void
 ReplicationService.initialize(Serverrs,
@@ -605,24 +605,15 @@
 
 
 boolean
-RegionMergeTransaction.rollback(Serverserver,
-RegionServerServicesservices)
-Deprecated.
-use #rollback(Server, 
RegionServerServices, User)
-
-
-
-
-boolean
 RegionMergeTransactionImpl.rollback(Serverserver,
 RegionServerServicesservices)
 
-
+
 boolean
 SplitTransactionImpl.rollback(Serverserver,
 RegionServerServicesservices)
 
-
+
 boolean
 SplitTransaction.rollback(Serverserver,
 RegionServerServicesservices)
@@ -631,27 +622,28 @@
 
 
 
-
+
 boolean
-RegionMergeTransaction.rollback(Serverserver,
-RegionServerServicesservices,
-Useruser)
-Roll back a failed transaction
+RegionMergeTransaction.rollback(Serverserver,
+RegionServerServicesservices)
+Deprecated.
+use #rollback(Server, 
RegionServerServices, User)
+
 
 
-
+
 boolean
 RegionMergeTransactionImpl.rollback(Serverserver,
 RegionServerServicesservices,
 Useruser)
 
-
+
 boolean
 SplitTransactionImpl.rollback(Serverserver,
 RegionServerServicesservices,
 Useruser)
 
-
+
 boolean
 SplitTransaction.rollback(Serverserver,
 RegionServerServicesservices,
@@ -659,6 +651,14 @@
 Roll back a failed transaction
 
 
+
+boolean
+RegionMergeTransaction.rollback(Serverserver,
+RegionServerServicesservices,
+Useruser)
+Roll back a failed transaction
+
+
 
 void
 RegionMergeTransactionImpl.stepsAfterPONR(Serverserver,

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
index 10b1dad..389cfd5 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/ServerName.html
@@ -256,11 +256,11 @@
 
 
 ServerName
-Server.getServerName()
+SplitLogTask.getServerName()
 
 

[46/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
index ef54c71..d4666b5 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html
@@ -124,100 +124,100 @@
 
 
 
-T
-DataType.decode(PositionedByteRangesrc)
-Read an instance of T from the buffer 
src.
-
+byte[]
+OrderedBlob.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Number.html?is-external=true;
 title="class or interface in java.lang">Number
-OrderedNumeric.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
+OrderedInt32.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
-RawLong.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+RawString.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
-RawShort.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
+OrderedInt64.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true;
 title="class or interface in java.lang">Object[]
-Struct.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
+RawDouble.decode(PositionedByteRangesrc)
 
 
-T
-FixedLengthWrapper.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
+RawInteger.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
-RawByte.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true;
 title="class or interface in java.lang">Double
+OrderedFloat64.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
-RawString.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
+RawFloat.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true;
 title="class or interface in java.lang">Byte
-OrderedInt8.decode(PositionedByteRangesrc)
+T
+FixedLengthWrapper.decode(PositionedByteRangesrc)
 
 
-byte[]
-RawBytes.decode(PositionedByteRangesrc)
+http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
+OrderedString.decode(PositionedByteRangesrc)
 
 
+http://docs.oracle.com/javase/8/docs/api/java/lang/Number.html?is-external=true;
 title="class or interface in java.lang">Number
+OrderedNumeric.decode(PositionedByteRangesrc)
+
+
 T
 TerminatedWrapper.decode(PositionedByteRangesrc)
 
+
+http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true;
 title="class or interface in java.lang">Float
+OrderedFloat32.decode(PositionedByteRangesrc)
+
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">String
-OrderedString.decode(PositionedByteRangesrc)
+byte[]
+RawBytes.decode(PositionedByteRangesrc)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true;
 title="class or interface in java.lang">Long
-OrderedInt64.decode(PositionedByteRangesrc)
+RawLong.decode(PositionedByteRangesrc)
 
 
 http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true;
 title="class or interface in java.lang">Short
-OrderedInt16.decode(PositionedByteRangesrc)
+RawShort.decode(PositionedByteRangesrc)
 
 
 byte[]
 OrderedBlobVar.decode(PositionedByteRangesrc)
 
 
-http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-RawInteger.decode(PositionedByteRangesrc)
-
-
-http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true;
 title="class or interface in java.lang">Integer
-OrderedInt32.decode(PositionedByteRangesrc)

[48/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html 
b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
index 82cc963..9d8a596 100644
--- a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
+++ b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
@@ -156,19 +156,19 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
+Get
+Get.setFilter(Filterfilter)
+
+
 Scan
 Scan.setFilter(Filterfilter)
 
-
+
 Query
 Query.setFilter(Filterfilter)
 Apply the specified server-side filter when performing the 
Query.
 
 
-
-Get
-Get.setFilter(Filterfilter)
-
 
 
 
@@ -390,83 +390,83 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 static Filter
-DependentColumnFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+ColumnPrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-ColumnPrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+SingleColumnValueExcludeFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-InclusiveStopFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+PrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-SingleColumnValueFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+ColumnCountGetFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-QualifierFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+FirstKeyOnlyFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-KeyOnlyFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+InclusiveStopFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-PrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+MultipleColumnPrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-MultipleColumnPrefixFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+ValueFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-PageFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
+ColumnPaginationFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 
java.util">ArrayListbyte[]filterArguments)
 
 
 static Filter
-ColumnCountGetFilter.createFilterFromArguments(http://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html?is-external=true;
 title="class or interface in 

[37/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/HDFSBlocksDistribution.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/HDFSBlocksDistribution.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/HDFSBlocksDistribution.html
index bf7fff2..689b99d 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/HDFSBlocksDistribution.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/HDFSBlocksDistribution.html
@@ -304,11 +304,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 HDFSBlocksDistribution
-StoreFile.getHDFSBlockDistribution()
+StoreFileInfo.getHDFSBlockDistribution()
 
 
 HDFSBlocksDistribution
-StoreFileInfo.getHDFSBlockDistribution()
+StoreFile.getHDFSBlockDistribution()
 
 
 HDFSBlocksDistribution



[19/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
index 4f412f2..7448db3 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockCacheKey.html
@@ -173,16 +173,16 @@
 
 
 void
-LruBlockCache.cacheBlock(BlockCacheKeycacheKey,
+BlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf)
-Cache the block with the specified name and buffer.
+Add block to cache (defaults to not in-memory).
 
 
 
 void
-BlockCache.cacheBlock(BlockCacheKeycacheKey,
+LruBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf)
-Add block to cache (defaults to not in-memory).
+Cache the block with the specified name and buffer.
 
 
 
@@ -192,17 +192,19 @@
 
 
 void
-InclusiveCombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
+CombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
   booleancacheDataInL1)
 
 
 void
-CombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
+BlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
-  booleancacheDataInL1)
+  booleancacheDataInL1)
+Add block to cache.
+
 
 
 void
@@ -215,12 +217,10 @@
 
 
 void
-BlockCache.cacheBlock(BlockCacheKeycacheKey,
+InclusiveCombinedBlockCache.cacheBlock(BlockCacheKeycacheKey,
   Cacheablebuf,
   booleaninMemory,
-  booleancacheDataInL1)
-Add block to cache.
-
+  booleancacheDataInL1)
 
 
 void
@@ -241,31 +241,33 @@
 
 
 boolean
-LruBlockCache.evictBlock(BlockCacheKeycacheKey)
-
-
-boolean
 BlockCache.evictBlock(BlockCacheKeycacheKey)
 Evict block from cache.
 
 
+
+boolean
+LruBlockCache.evictBlock(BlockCacheKeycacheKey)
+
 
 boolean
 MemcachedBlockCache.evictBlock(BlockCacheKeycacheKey)
 
 
 Cacheable
-InclusiveCombinedBlockCache.getBlock(BlockCacheKeycacheKey,
+CombinedBlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
 booleanupdateCacheMetrics)
 
 
 Cacheable
-CombinedBlockCache.getBlock(BlockCacheKeycacheKey,
+BlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
-booleanupdateCacheMetrics)
+booleanupdateCacheMetrics)
+Fetch block from cache.
+
 
 
 Cacheable
@@ -278,12 +280,10 @@
 
 
 Cacheable
-BlockCache.getBlock(BlockCacheKeycacheKey,
+InclusiveCombinedBlockCache.getBlock(BlockCacheKeycacheKey,
 booleancaching,
 booleanrepeat,
-booleanupdateCacheMetrics)
-Fetch block from cache.
-
+booleanupdateCacheMetrics)
 
 
 Cacheable
@@ -315,17 +315,17 @@
 
 
 void
-LruBlockCache.returnBlock(BlockCacheKeycacheKey,
-   Cacheableblock)
-
-
-void
 BlockCache.returnBlock(BlockCacheKeycacheKey,
Cacheableblock)
 Called when the scanner using the block decides to return 
the block once its usage
  is over.
 
 
+
+void
+LruBlockCache.returnBlock(BlockCacheKeycacheKey,
+   Cacheableblock)
+
 
 void
 MemcachedBlockCache.returnBlock(BlockCacheKeycacheKey,

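The reordered rows above all describe the same BlockCache contract: cacheBlock adds a block under a BlockCacheKey, getBlock fetches it, and evictBlock removes it, with LruBlockCache evicting least-recently-used entries. The sketch below is a hypothetical, self-contained analog of that contract (SimpleBlockCache and its String keys are stand-ins, not the real org.apache.hadoop.hbase.io.hfile API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for the BlockCache contract described in the diff:
// cacheBlock adds a block under a key, getBlock fetches it, evictBlock removes it.
class SimpleBlockCache {
    private final int capacity;
    private final LinkedHashMap<String, byte[]> blocks;

    SimpleBlockCache(int capacity) {
        this.capacity = capacity;
        // access-order LinkedHashMap gives LRU eviction, echoing LruBlockCache
        this.blocks = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > SimpleBlockCache.this.capacity;
            }
        };
    }

    void cacheBlock(String cacheKey, byte[] buf) { blocks.put(cacheKey, buf); }

    byte[] getBlock(String cacheKey) { return blocks.get(cacheKey); }

    boolean evictBlock(String cacheKey) { return blocks.remove(cacheKey) != null; }
}

public class BlockCacheDemo {
    public static void main(String[] args) {
        SimpleBlockCache cache = new SimpleBlockCache(2);
        cache.cacheBlock("hfile-1/0", new byte[]{1});
        cache.cacheBlock("hfile-1/64", new byte[]{2});
        cache.getBlock("hfile-1/0");                    // touch: "hfile-1/64" is now LRU
        cache.cacheBlock("hfile-1/128", new byte[]{3}); // evicts "hfile-1/64"
        System.out.println(cache.getBlock("hfile-1/64") == null); // true
        System.out.println(cache.evictBlock("hfile-1/0"));        // true
    }
}
```

The real interface adds flags such as inMemory and cacheDataInL1 on cacheBlock, and caching/repeat/updateCacheMetrics on getBlock; this sketch keeps only the core key-to-block mapping.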
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockType.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockType.html 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockType.html
index 36104b4..4cd5b10 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockType.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/BlockType.html
@@ -194,25 +194,25 @@
 
 
 BlockType
-Cacheable.getBlockType()
+BlockCacheKey.getBlockType()
 
 
 BlockType
-BlockCacheKey.getBlockType()
+Cacheable.getBlockType()
 
 
 BlockType
-HFileBlock.getBlockType()
+CachedBlock.getBlockType()
 
 
 BlockType
-HFileBlock.BlockWritable.getBlockType()
-The type of block this data should use.
-
+HFileBlock.getBlockType()
 
 
 BlockType
-CachedBlock.getBlockType()
+HFileBlock.BlockWritable.getBlockType()
+The type of block this data should use.
+
 
 
 BlockType
@@ -220,14 +220,14 @@
 
 
 BlockType
-HFileBlockIndex.BlockIndexWriter.getInlineBlockType()
-
-
-BlockType
 InlineBlockWriter.getInlineBlockType()
 The type of blocks this block writer produces.
 
 
+
+BlockType
+HFileBlockIndex.BlockIndexWriter.getInlineBlockType()
+
 
 static BlockType
 BlockType.parse(byte[]buf,
@@ -284,26 +284,26 @@ the order they are declared.
 
 
 void
-NoOpDataBlockEncoder.endBlockEncoding(HFileBlockEncodingContextencodingCtx,
+HFileDataBlockEncoder.endBlockEncoding(HFileBlockEncodingContextencodingCtx,
 

[23/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html 
b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
index 1a8bb34..e0db95e 100644
--- a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
+++ b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
@@ -140,41 +140,39 @@
 
 
 Filter.ReturnCode
-DependentColumnFilter.filterKeyValue(Cellc)
+QualifierFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-PrefixFilter.filterKeyValue(Cellv)
+WhileMatchFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-FamilyFilter.filterKeyValue(Cellv)
+RandomRowFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-WhileMatchFilter.filterKeyValue(Cellv)
+ColumnCountGetFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-InclusiveStopFilter.filterKeyValue(Cellv)
+DependentColumnFilter.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-FirstKeyOnlyFilter.filterKeyValue(Cellv)
+KeyOnlyFilter.filterKeyValue(Cellignored)
 
 
 Filter.ReturnCode
-TimestampsFilter.filterKeyValue(Cellv)
+FuzzyRowFilter.filterKeyValue(Cellc)
 
 
-abstract Filter.ReturnCode
-Filter.filterKeyValue(Cellv)
-A way to filter based on the column family, column 
qualifier and/or the column value.
-
+Filter.ReturnCode
+SingleColumnValueFilter.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-KeyOnlyFilter.filterKeyValue(Cellignored)
+FamilyFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
@@ -182,69 +180,71 @@
 
 
 Filter.ReturnCode
-QualifierFilter.filterKeyValue(Cellv)
+FilterList.filterKeyValue(Cellc)
 
 
 Filter.ReturnCode
-SkipFilter.filterKeyValue(Cellv)
+ColumnPrefixFilter.filterKeyValue(Cellcell)
 
 
 Filter.ReturnCode
-ColumnCountGetFilter.filterKeyValue(Cellv)
+ColumnRangeFilter.filterKeyValue(Cellkv)
 
 
 Filter.ReturnCode
-RandomRowFilter.filterKeyValue(Cellv)
+PrefixFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-FuzzyRowFilter.filterKeyValue(Cellc)
+ValueFilter.filterKeyValue(Cellv)
 
 
-Filter.ReturnCode
-SingleColumnValueFilter.filterKeyValue(Cellc)
+abstract Filter.ReturnCode
+Filter.filterKeyValue(Cellv)
+A way to filter based on the column family, column 
qualifier and/or the column value.
+
 
 
 Filter.ReturnCode
-FilterList.filterKeyValue(Cellc)
+MultiRowRangeFilter.filterKeyValue(Cellignored)
 
 
 Filter.ReturnCode
-ColumnRangeFilter.filterKeyValue(Cellkv)
+FirstKeyOnlyFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-MultiRowRangeFilter.filterKeyValue(Cellignored)
-
-
-Filter.ReturnCode
 FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cellv)
 Deprecated.
 
 
+
+Filter.ReturnCode
+PageFilter.filterKeyValue(Cellignored)
+
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterKeyValue(Cellcell)
+TimestampsFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-PageFilter.filterKeyValue(Cellignored)
+ColumnPaginationFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-RowFilter.filterKeyValue(Cellv)
+SkipFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-ValueFilter.filterKeyValue(Cellv)
+RowFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-FilterWrapper.filterKeyValue(Cellv)
+InclusiveStopFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-ColumnPaginationFilter.filterKeyValue(Cellv)
+FilterWrapper.filterKeyValue(Cellv)
 
 
 static Filter.ReturnCode
@@ -311,11 +311,11 @@ the order they are declared.
 
 
 Filter.ReturnCode
-VisibilityLabelFilter.filterKeyValue(Cellcell)
+VisibilityController.DeleteVersionVisibilityExpressionFilter.filterKeyValue(Cellcell)
 
 
 Filter.ReturnCode
-VisibilityController.DeleteVersionVisibilityExpressionFilter.filterKeyValue(Cellcell)
+VisibilityLabelFilter.filterKeyValue(Cellcell)
 
 
 

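The many filterKeyValue rows shuffled in this hunk all implement one idea: each filter inspects a single Cell and answers with a Filter.ReturnCode telling the scanner to keep it, skip it, or jump ahead. A minimal, self-contained sketch of that pattern (the enum here lists only three of the real codes, and the String-qualifier "cell" is a hypothetical stand-in for the HBase Cell type):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical analog of Filter.filterKeyValue: one verdict per cell.
enum ReturnCode { INCLUDE, SKIP, NEXT_ROW }

class ColumnPrefixFilterSketch {
    private final String prefix;

    ColumnPrefixFilterSketch(String prefix) { this.prefix = prefix; }

    // mirrors the shape of ColumnPrefixFilter.filterKeyValue(Cell cell)
    ReturnCode filterKeyValue(String qualifier) {
        return qualifier.startsWith(prefix) ? ReturnCode.INCLUDE : ReturnCode.SKIP;
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        ColumnPrefixFilterSketch f = new ColumnPrefixFilterSketch("d:");
        List<String> kept = new ArrayList<>();
        for (String q : new String[]{"d:name", "m:size", "d:age"}) {
            if (f.filterKeyValue(q) == ReturnCode.INCLUDE) kept.add(q);
        }
        System.out.println(kept); // [d:name, d:age]
    }
}
```

Codes like NEXT_ROW exist so a filter can tell the scanner to stop reading the current row entirely rather than rejecting cells one by one.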
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html 
b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
index 56f15fe..f9ed74a 100644
--- a/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
+++ b/devapidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
@@ -170,11 +170,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 Filter
-Scan.getFilter()
+Query.getFilter()
 
 
 Filter
-Query.getFilter()
+Scan.getFilter()
 
 
 
@@ -186,19 +186,19 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-Scan
-Scan.setFilter(Filterfilter)
-
-
 Get
 Get.setFilter(Filterfilter)
 
-
+
 Query
 Query.setFilter(Filterfilter)
 Apply the specified server-side filter when performing the 
Query.
 
 
+
+Scan
+Scan.setFilter(Filterfilter)
+
 
 
 
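The setFilter rows in this hunk show a covariant-return pattern: Query declares setFilter, while Scan and Get override it to return their own type so fluent call chains keep working. A simplified sketch of that design (class names mirror the HBase client API, but the bodies are hypothetical stand-ins, not the real implementation):

```java
// Covariant setFilter: the override narrows the return type from Query to Scan.
class Filter {}

class Query {
    protected Filter filter;
    Query setFilter(Filter filter) { this.filter = filter; return this; }
    Filter getFilter() { return filter; }
}

class Scan extends Query {
    private int caching = -1;

    @Override
    Scan setFilter(Filter filter) { // covariant return keeps chaining on Scan
        super.setFilter(filter);
        return this;
    }

    Scan setCaching(int caching) { this.caching = caching; return this; }
}

public class SetFilterDemo {
    public static void main(String[] args) {
        Filter f = new Filter();
        // no cast needed: setFilter on a Scan returns Scan, so the chain compiles
        Scan scan = new Scan().setFilter(f).setCaching(100);
        System.out.println(scan.getFilter() == f); // true
    }
}
```

Without the override, new Scan().setFilter(f) would be typed as Query and the Scan-specific setCaching call would not compile.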
@@ -464,69 +464,69 @@ Input/OutputFormats, a table indexing MapReduce job, and 

[51/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/fd52f877
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/fd52f877
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/fd52f877

Branch: refs/heads/asf-site
Commit: fd52f8775d129a65843404f2e19c837e66884a23
Parents: e947fdc
Author: jenkins <bui...@apache.org>
Authored: Fri Sep 16 14:57:47 2016 +
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Fri Sep 16 10:24:52 2016 -0700

--
 acid-semantics.html |4 +-
 apache_hbase_reference_guide.pdf|4 +-
 apache_hbase_reference_guide.pdfmarks   |4 +-
 .../apache/hadoop/hbase/KeepDeletedCells.html   |4 +-
 .../org/apache/hadoop/hbase/class-use/Cell.html |  232 +-
 .../hadoop/hbase/class-use/ServerName.html  |4 +-
 .../hadoop/hbase/class-use/TableName.html   |8 +-
 .../apache/hadoop/hbase/client/Consistency.html |4 +-
 .../hbase/client/class-use/Consistency.html |   10 +-
 .../hbase/client/class-use/Durability.html  |8 +-
 .../hbase/client/class-use/IsolationLevel.html  |   10 +-
 .../hadoop/hbase/client/class-use/Result.html   |   48 +-
 .../hadoop/hbase/client/class-use/Row.html  |6 +-
 .../hadoop/hbase/client/package-tree.html   |8 +-
 .../filter/class-use/Filter.ReturnCode.html |   60 +-
 .../hadoop/hbase/filter/class-use/Filter.html   |   50 +-
 .../hadoop/hbase/filter/package-tree.html   |6 +-
 .../io/class-use/ImmutableBytesWritable.html|   70 +-
 .../hadoop/hbase/io/class-use/TimeRange.html|   12 +-
 .../hbase/io/crypto/class-use/Cipher.html   |   16 +-
 .../hbase/io/encoding/DataBlockEncoding.html|4 +-
 .../apache/hadoop/hbase/quotas/QuotaType.html   |4 +-
 .../hbase/quotas/ThrottlingException.Type.html  |4 +-
 .../hadoop/hbase/quotas/package-tree.html   |2 +-
 .../hadoop/hbase/regionserver/BloomType.html|4 +-
 .../hadoop/hbase/util/class-use/Bytes.html  |   16 +-
 .../hadoop/hbase/util/class-use/Order.html  |   42 +-
 .../hadoop/hbase/util/class-use/Pair.html   |4 +-
 .../util/class-use/PositionedByteRange.html |  380 +-
 apidocs/overview-tree.html  |   18 +-
 book.html   |2 +-
 bulk-loads.html |4 +-
 checkstyle-aggregate.html   |  230 +-
 checkstyle.rss  | 6850 +-
 coc.html|4 +-
 cygwin.html |4 +-
 dependencies.html   |4 +-
 dependency-convergence.html |4 +-
 dependency-info.html|4 +-
 dependency-management.html  |4 +-
 devapidocs/constant-values.html |8 +-
 devapidocs/deprecated-list.html |  244 +-
 devapidocs/index-all.html   |2 +
 .../org/apache/hadoop/hbase/KeyValue.Type.html  |4 +-
 .../org/apache/hadoop/hbase/ProcedureState.html |4 +-
 .../hadoop/hbase/class-use/Abortable.html   |   34 +-
 .../org/apache/hadoop/hbase/class-use/Cell.html | 1022 +--
 .../hadoop/hbase/class-use/CellComparator.html  |  120 +-
 .../hadoop/hbase/class-use/CellScanner.html |   56 +-
 .../hadoop/hbase/class-use/ClusterStatus.html   |   20 +-
 .../hadoop/hbase/class-use/Coprocessor.html |   12 +-
 .../hbase/class-use/CoprocessorEnvironment.html |   58 +-
 .../hbase/class-use/HBaseIOException.html   |8 +-
 .../hbase/class-use/HColumnDescriptor.html  |  190 +-
 .../hbase/class-use/HDFSBlocksDistribution.html |4 +-
 .../hadoop/hbase/class-use/HRegionInfo.html |  326 +-
 .../hadoop/hbase/class-use/HRegionLocation.html |   16 +-
 .../hbase/class-use/HTableDescriptor.html   |  300 +-
 .../InterProcessLock.MetadataHandler.html   |8 +-
 .../apache/hadoop/hbase/class-use/KeyValue.html |   52 +-
 .../hbase/class-use/NamespaceDescriptor.html|  116 +-
 .../hadoop/hbase/class-use/ProcedureInfo.html   |   20 +-
 .../hadoop/hbase/class-use/RegionLocations.html |   24 +-
 .../apache/hadoop/hbase/class-use/Server.html   |  106 +-
 .../hadoop/hbase/class-use/ServerName.html  |  150 +-
 .../hbase/class-use/TableDescriptors.html   |4 +-
 .../hadoop/hbase/class-use/TableName.html   | 1162 +--
 .../class-use/TableNotDisabledException.html|8 +-
 .../hbase/class-use/TableNotFoundException.html |8 +-
 .../hbase/classification/package-tree.html  |6 +-
 .../hbase/classification/package-use.html   |   10 +
 .../client/AbstractResponse.ResponseType.html   |4 +-
 .../hadoop/hbase/client/AsyncProcess.Retry

[03/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RegionServerServices.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RegionServerServices.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RegionServerServices.html
index dfb702f..99aae4e 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RegionServerServices.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/RegionServerServices.html
@@ -234,15 +234,15 @@
 
 
 
+void
+RegionServerProcedureManagerHost.initialize(RegionServerServicesrss)
+
+
 abstract void
 RegionServerProcedureManager.initialize(RegionServerServicesrss)
 Initialize a globally barriered procedure for region 
servers.
 
 
-
-void
-RegionServerProcedureManagerHost.initialize(RegionServerServicesrss)
-
 
 
 
@@ -351,16 +351,6 @@
 CompactedHFilesDischarger.regionServerServices
 
 
-(package private) RegionServerServices
-RegionCoprocessorHost.rsServices
-The region server services
-
-
-
-private RegionServerServices
-RegionCoprocessorHost.RegionEnvironment.rsServices
-
-
 private RegionServerServices
 RegionMergeTransactionImpl.rsServices
 
@@ -369,12 +359,22 @@
 SplitTransactionImpl.rsServices
 
 
+(package private) RegionServerServices
+HRegion.rsServices
+
+
 private RegionServerServices
 RegionServerCoprocessorHost.rsServices
 
-
+
 (package private) RegionServerServices
-HRegion.rsServices
+RegionCoprocessorHost.rsServices
+The region server services
+
+
+
+private RegionServerServices
+RegionCoprocessorHost.RegionEnvironment.rsServices
 
 
 private RegionServerServices
@@ -395,25 +395,23 @@
 
 
 RegionServerServices
-RegionMergeTransaction.getRegionServerServices()
-Get the RegonServerServices of the server running the 
transaction or rollback
-
+RegionMergeTransactionImpl.getRegionServerServices()
 
 
 RegionServerServices
-RegionCoprocessorHost.RegionEnvironment.getRegionServerServices()
+SplitTransactionImpl.getRegionServerServices()
 
 
-RegionServerServices
-RegionMergeTransactionImpl.getRegionServerServices()
+(package private) RegionServerServices
+HRegion.getRegionServerServices()
 
 
 RegionServerServices
-SplitTransactionImpl.getRegionServerServices()
+RegionServerCoprocessorHost.RegionServerEnvironment.getRegionServerServices()
 
 
 RegionServerServices
-RegionServerCoprocessorHost.RegionServerEnvironment.getRegionServerServices()
+RegionCoprocessorHost.RegionEnvironment.getRegionServerServices()
 
 
 RegionServerServices
@@ -422,8 +420,10 @@
 
 
 
-(package private) RegionServerServices
-HRegion.getRegionServerServices()
+RegionServerServices
+RegionMergeTransaction.getRegionServerServices()
+Get the RegionServerServices of the server running the 
transaction or rollback
+
 
 
 
@@ -461,24 +461,15 @@
 
 
 Region
-RegionMergeTransaction.execute(Serverserver,
-   RegionServerServicesservices)
-Deprecated.
-use #execute(Server, 
RegionServerServices, User)
-
-
-
-
-Region
 RegionMergeTransactionImpl.execute(Serverserver,
RegionServerServicesservices)
 
-
+
 PairOfSameTypeRegion
 SplitTransactionImpl.execute(Serverserver,
RegionServerServicesservices)
 
-
+
 PairOfSameTypeRegion
 SplitTransaction.execute(Serverserver,
RegionServerServicesservices)
@@ -487,27 +478,28 @@
 
 
 
-
+
 Region
-RegionMergeTransaction.execute(Serverserver,
-   RegionServerServicesservices,
-   Useruser)
-Run the transaction.
+RegionMergeTransaction.execute(Serverserver,
+   RegionServerServicesservices)
+Deprecated.
+use #execute(Server, 
RegionServerServices, User)
+
 
 
-
+
 Region
 RegionMergeTransactionImpl.execute(Serverserver,
RegionServerServicesservices,
Useruser)
 
-
+
 PairOfSameTypeRegion
 SplitTransactionImpl.execute(Serverserver,
RegionServerServicesservices,
Useruser)
 
-
+
 PairOfSameTypeRegion
 SplitTransaction.execute(Serverserver,
RegionServerServicesservices,
@@ -515,6 +507,14 @@
 Run the transaction.
 
 
+
+Region
+RegionMergeTransaction.execute(Serverserver,
+   RegionServerServicesservices,
+   Useruser)
+Run the transaction.
+
+
 
 (package private) boolean
 RegionMergeTransactionImpl.hasMergeQualifierInMeta(RegionServerServicesservices,
@@ -604,34 +604,25 @@
 
 
 boolean
-RegionMergeTransaction.prepare(RegionServerServicesservices)
-Check merge inputs and prepare the transaction.
-
-
-
-boolean
 RegionMergeTransactionImpl.prepare(RegionServerServicesservices)
 
-
+
 boolean
-RegionMergeTransaction.rollback(Serverserver,
-RegionServerServicesservices)
-Deprecated.
-use #rollback(Server, 
RegionServerServices, User)
-
+RegionMergeTransaction.prepare(RegionServerServicesservices)
+Check merge inputs and prepare the transaction.
 
 
-
+
 boolean
 RegionMergeTransactionImpl.rollback(Serverserver,
 RegionServerServicesservices)
 
-
+
 boolean
 

[35/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
index 0d31f2f..8eabb9c 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/HRegionLocation.html
@@ -274,11 +274,11 @@ service.
 
 
 protected HRegionLocation
-RegionAdminServiceCallable.location
+AbstractRegionServerCallable.location
 
 
 protected HRegionLocation
-AbstractRegionServerCallable.location
+RegionAdminServiceCallable.location
 
 
 
@@ -306,11 +306,11 @@ service.
 
 
 protected HRegionLocation
-MultiServerCallable.getLocation()
+AbstractRegionServerCallable.getLocation()
 
 
 protected HRegionLocation
-AbstractRegionServerCallable.getLocation()
+MultiServerCallable.getLocation()
 
 
 HRegionLocation
@@ -476,16 +476,16 @@ service.
 
 
 
-private void
-ConnectionImplementation.cacheLocation(TableNametableName,
+void
+MetaCache.cacheLocation(TableNametableName,
  ServerNamesource,
  HRegionLocationlocation)
 Put a newly discovered HRegionLocation into the cache.
 
 
 
-void
-MetaCache.cacheLocation(TableNametableName,
+private void
+ConnectionImplementation.cacheLocation(TableNametableName,
  ServerNamesource,
  HRegionLocationlocation)
 Put a newly discovered HRegionLocation into the cache.

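The cacheLocation rows describe MetaCache's job: remember, per table, which server hosts the region covering a given row, so the client can skip a meta lookup. A hypothetical single-threaded sketch of that idea using a sorted map keyed by region start key (the class and its String keys are illustrative stand-ins, not the real org.apache.hadoop.hbase.client.MetaCache):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the region-location cache: per table, the region
// whose start key is <= the row key (floor lookup) is the one that holds it.
class LocationCache {
    private final Map<String, TreeMap<String, String>> byTable = new HashMap<>();

    // analog of cacheLocation(tableName, source, location)
    void cacheLocation(String table, String regionStartKey, String server) {
        byTable.computeIfAbsent(table, t -> new TreeMap<>()).put(regionStartKey, server);
    }

    // returns the cached server for the region covering this row, or null
    String getCachedLocation(String table, String row) {
        TreeMap<String, String> regions = byTable.get(table);
        if (regions == null) return null;
        Map.Entry<String, String> e = regions.floorEntry(row);
        return e == null ? null : e.getValue();
    }
}

public class MetaCacheDemo {
    public static void main(String[] args) {
        LocationCache cache = new LocationCache();
        cache.cacheLocation("t1", "", "rs1:16020");  // region [start-of-table, "m")
        cache.cacheLocation("t1", "m", "rs2:16020"); // region ["m", end-of-table)
        System.out.println(cache.getCachedLocation("t1", "apple")); // rs1:16020
        System.out.println(cache.getCachedLocation("t1", "zebra")); // rs2:16020
    }
}
```

The real MetaCache additionally handles concurrency, replica locations, and invalidation when a region moves; this sketch keeps only the floor-entry lookup that makes start-key indexing work.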


[01/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site e947fdc25 -> e3ab1d1d0


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
index b39f53a..60b4e4f 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/class-use/StoreFile.html
@@ -607,31 +607,31 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionStoreFile
+DefaultStoreFileManager.clearCompactedFiles()
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionStoreFile
 StoreFileManager.clearCompactedFiles()
 Clears all the compacted files and returns them.
 
 
-
+
 com.google.common.collect.ImmutableCollectionStoreFile
 StripeStoreFileManager.clearCompactedFiles()
 
-
-http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionStoreFile
-DefaultStoreFileManager.clearCompactedFiles()
-
 
 com.google.common.collect.ImmutableCollectionStoreFile
-StoreFileManager.clearFiles()
-Clears all the files currently in use and returns 
them.
-
+DefaultStoreFileManager.clearFiles()
 
 
 com.google.common.collect.ImmutableCollectionStoreFile
-StripeStoreFileManager.clearFiles()
+StoreFileManager.clearFiles()
+Clears all the files currently in use and returns 
them.
+
 
 
 com.google.common.collect.ImmutableCollectionStoreFile
-DefaultStoreFileManager.clearFiles()
+StripeStoreFileManager.clearFiles()
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionStoreFile
@@ -641,15 +641,15 @@
 
 
 
-com.google.common.collect.ImmutableCollectionStoreFile
-HStore.close()
-
-
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">Mapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListStoreFile
 HRegion.close()
 Close down this HRegion.
 
 
+
+com.google.common.collect.ImmutableCollectionStoreFile
+HStore.close()
+
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true;
 title="class or interface in java.util">Mapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListStoreFile
 HRegion.close(booleanabort)
@@ -667,18 +667,18 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListStoreFile
-HMobStore.compact(CompactionContextcompaction,
+Store.compact(CompactionContextcompaction,
ThroughputControllerthroughputController)
-The compaction in the store of mob.
+Deprecated.
+see 
compact(CompactionContext, ThroughputController, User)
+
 
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListStoreFile
-Store.compact(CompactionContextcompaction,
+HMobStore.compact(CompactionContextcompaction,
ThroughputControllerthroughputController)
-Deprecated.
-see 
compact(CompactionContext, ThroughputController, User)
-
+The compaction in the store of mob.
 
 
 
@@ -714,35 +714,35 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorStoreFile
+DefaultStoreFileManager.getCandidateFilesForRowKeyBefore(KeyValuetargetKey)
+
+
+http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorStoreFile
 StoreFileManager.getCandidateFilesForRowKeyBefore(KeyValuetargetKey)
 Gets initial, full list of candidate store files to check 
for row-key-before.
 
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorStoreFile
 StripeStoreFileManager.getCandidateFilesForRowKeyBefore(KeyValuetargetKey)
 See StoreFileManager.getCandidateFilesForRowKeyBefore(KeyValue)
 for details on this method.
 
 
-
-http://docs.oracle.com/javase/8/docs/api/java/util/Iterator.html?is-external=true;
 title="class or interface in java.util">IteratorStoreFile
-DefaultStoreFileManager.getCandidateFilesForRowKeyBefore(KeyValuetargetKey)
-
 
 http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true;
 title="class or interface in java.util">CollectionStoreFile
+DefaultStoreFileManager.getCompactedfiles()
+
+

[26/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/MasterCoprocessorEnvironment.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/MasterCoprocessorEnvironment.html
 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/MasterCoprocessorEnvironment.html
index 4e68b24..3348cac 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/MasterCoprocessorEnvironment.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/MasterCoprocessorEnvironment.html
@@ -122,9 +122,7 @@
 
 
 void
-MasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
-Called after an abortProcedure request has been 
processed.
-
+BaseMasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
 
 
 void
@@ -132,17 +130,19 @@
 
 
 void
-BaseMasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
+MasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
+Called after a abortProcedure request has been 
processed.
+
 
 
 void
-MasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
+BaseMasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
  TableNametableName,
  HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645).
- Use MasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
+ Use BaseMasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
@@ -156,23 +156,21 @@
 
 
 void
-BaseMasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
+MasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
  TableNametableName,
  HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645).
- Use BaseMasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
+ Use MasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
 
 void
-MasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
+BaseMasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
TableNametableName,
-   HColumnDescriptorcolumnFamily)
-Called after the new column family has been created.
-
+   HColumnDescriptorcolumnFamily)
 
 
 void
@@ -182,19 +180,21 @@
 
 
 void
-BaseMasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
+MasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
TableNametableName,
-   HColumnDescriptorcolumnFamily)
+   HColumnDescriptorcolumnFamily)
+Called after the new column family has been created.
+
 
 
 void
-MasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
+BaseMasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
 TableNametableName,
 HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645). Use
- MasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
+ BaseMasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
@@ -208,18 +208,28 @@
 
 
 void
-BaseMasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
+MasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
 TableNametableName,
 HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645). Use
- BaseMasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
+ MasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
 
 void
+BaseMasterObserver.postAddRSGroup(ObserverContextMasterCoprocessorEnvironmentctx,
+  http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
+
+
+void
+BaseMasterAndRegionObserver.postAddRSGroup(ObserverContextMasterCoprocessorEnvironmentctx,
+  http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or 

[12/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/procedure/ModifyTableProcedure.html
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/procedure/RestoreSnapshotProcedure.html
(Diff bodies garbled by HTML extraction and condensed: in both regenerated pages, the
"methods inherited from Procedure" summary changes only by the addition of isRunnable; the
remaining churn is link-markup reflow.)
[28/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
(Diff body garbled by HTML extraction and condensed: the regenerated page reorders the rows
for the scanner next() methods (ClientSimpleScanner, ClientAsyncPrefetchScanner), the
rpcCall() overrides (ScannerCallable, ClientSmallScanner.SmallScannerCallable,
RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable), the
TableRecordReader / TableSnapshotInputFormat.TableSnapshotRecordReader entries, the
getRecordReader overloads, and the mapred map()/next() methods; no semantic change.
Message truncated.)
[16/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/class-use/HMaster.html
(Diff body garbled by HTML extraction and condensed: the regenerated page reorders the
HMaster field rows (MobCompactionChore, MetricsMasterWrapperImpl, MasterRpcServices,
MasterMetaBootstrap, and others) and the getMaster/setMaster, makeRenderer, and
render/renderNoFlush rows for BackupMasterStatusTmpl, MasterStatusTmpl, and
RegionServerListTmpl; no semantic change. Message truncated.)
[14/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/procedure/CreateNamespaceProcedure.html
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/procedure/CreateTableProcedure.html
(Diff bodies garbled by HTML extraction and condensed: in both regenerated pages, the
inherited-from-Procedure method summary changes only by the addition of isRunnable.)
[07/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.html
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/procedure2/TwoPhaseProcedure.html
(Diff bodies garbled by HTML extraction and condensed: in both regenerated pages, the
inherited Procedure method list gains isRunnable alongside isFailed, isFinished, isSuccess,
isSuspended, and isWaiting; the rest of the churn is markup reflow. Message truncated.)

[11/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/procedure/class-use/MasterProcedureEnv.html
(Diff body garbled by HTML extraction and condensed: the regenerated page reorders the
preAbortProcedure rows (MasterObserver vs. BaseMasterObserver), the
getMasterProcedureExecutor rows (HMaster vs. MasterServices), and the abort, acquireLock,
completionCleanup, getRegionInfoList, and getTableNamespaceManager rows across the procedure
classes; no semantic change.)
[45/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/overview-tree.html
(Enum entries under the class hierarchy were reordered by the generator; no semantic
change.)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/book.html
(The "Last updated" footer for Version 2.0.0-SNAPSHOT changed from
2016-07-30 14:32:09 +00:00 to 2016-07-24 14:31:11 +00:00.)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/bulk-loads.html
(The "Last Published" date changed from 2016-09-15 to 2016-09-16.)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/checkstyle-aggregate.html
(The aggregate error count rose from 11574 to 11576 across 1845 files:
org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java went from 9 to 10 errors and
org/apache/hadoop/hbase/procedure2/util/TimeoutBlockingQueue.java from 4 to 5. Rule totals:
NeedBraces 1672 -> 1674, NonEmptyAtclauseDescription 3234 -> 3235,
JavadocTagContinuationIndentation 703 -> 702, plus line-number shifts in existing
NeedBraces, UpperEll, and NonEmptyAtclauseDescription entries. Message truncated.)

[30/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
index 1eec2af..7ae190c 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotDisabledException.html
@@ -104,13 +104,13 @@
 
 
 void
-MasterServices.checkTableModifiable(TableNametableName)
-Check table is modifiable; i.e.
-
+HMaster.checkTableModifiable(TableNametableName)
 
 
 void
-HMaster.checkTableModifiable(TableNametableName)
+MasterServices.checkTableModifiable(TableNametableName)
+Check table is modifiable; i.e.
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html 
b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
index 1395885..431b080 100644
--- a/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
+++ b/devapidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
@@ -178,13 +178,13 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 void
-MasterServices.checkTableModifiable(TableNametableName)
-Check table is modifiable; i.e.
-
+HMaster.checkTableModifiable(TableNametableName)
 
 
 void
-HMaster.checkTableModifiable(TableNametableName)
+MasterServices.checkTableModifiable(TableNametableName)
+Check table is modifiable; i.e.
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/classification/package-tree.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/classification/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/classification/package-tree.html
index 2e93774..6995dd7 100644
--- a/devapidocs/org/apache/hadoop/hbase/classification/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/classification/package-tree.html
@@ -88,12 +88,12 @@
 
 Annotation Type Hierarchy
 
-org.apache.hadoop.hbase.classification.InterfaceAudience.Private (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
 org.apache.hadoop.hbase.classification.InterfaceStability.Evolving (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
-org.apache.hadoop.hbase.classification.InterfaceAudience.Public (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
+org.apache.hadoop.hbase.classification.InterfaceStability.Stable (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
+org.apache.hadoop.hbase.classification.InterfaceAudience.Private (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
 org.apache.hadoop.hbase.classification.InterfaceAudience.LimitedPrivate (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
+org.apache.hadoop.hbase.classification.InterfaceAudience.Public (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
 org.apache.hadoop.hbase.classification.InterfaceStability.Unstable (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
-org.apache.hadoop.hbase.classification.InterfaceStability.Stable (implements 
java.lang.annotation.http://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html?is-external=true;
 title="class or interface in java.lang.annotation">Annotation)
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/classification/package-use.html

[20/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.EncodedScanner.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.EncodedScanner.html
 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.EncodedScanner.html
index e91bf54..0c3d1ac 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.EncodedScanner.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.EncodedScanner.html
@@ -122,7 +122,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-protected static class HFileReaderImpl.EncodedScanner
+protected static class HFileReaderImpl.EncodedScanner
 extends HFileReaderImpl.HFileScannerImpl
 Scanner that operates on encoded data blocks.
 
@@ -312,7 +312,7 @@ extends 
 
 decodingCtx
-private finalHFileBlockDecodingContext decodingCtx
+private finalHFileBlockDecodingContext decodingCtx
 
 
 
@@ -321,7 +321,7 @@ extends 
 
 seeker
-private finalDataBlockEncoder.EncodedSeeker seeker
+private finalDataBlockEncoder.EncodedSeeker seeker
 
 
 
@@ -330,7 +330,7 @@ extends 
 
 dataBlockEncoder
-private finalDataBlockEncoder 
dataBlockEncoder
+private finalDataBlockEncoder 
dataBlockEncoder
 
 
 
@@ -347,7 +347,7 @@ extends 
 
 EncodedScanner
-publicEncodedScanner(HFile.Readerreader,
+publicEncodedScanner(HFile.Readerreader,
   booleancacheBlocks,
   booleanpread,
   booleanisCompaction,
@@ -368,7 +368,7 @@ extends 
 
 isSeeked
-publicbooleanisSeeked()
+publicbooleanisSeeked()
 
 Specified by:
 isSeekedin
 interfaceHFileScanner
@@ -387,7 +387,7 @@ extends 
 
 setNonSeekedState
-publicvoidsetNonSeekedState()
+publicvoidsetNonSeekedState()
 
 Overrides:
 setNonSeekedStatein
 classHFileReaderImpl.HFileScannerImpl
@@ -400,7 +400,7 @@ extends 
 
 updateCurrentBlock
-protectedvoidupdateCurrentBlock(HFileBlocknewBlock)
+protectedvoidupdateCurrentBlock(HFileBlocknewBlock)
throws CorruptHFileException
 Updates the current block to be the given HFileBlock. 
Seeks to
  the the first key/value pair.
@@ -420,7 +420,7 @@ extends 
 
 getEncodedBuffer
-privateByteBuffgetEncodedBuffer(HFileBlocknewBlock)
+privateByteBuffgetEncodedBuffer(HFileBlocknewBlock)
 
 
 
@@ -429,7 +429,7 @@ extends 
 
 processFirstDataBlock
-protectedbooleanprocessFirstDataBlock()
+protectedbooleanprocessFirstDataBlock()
  throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 
 Overrides:
@@ -445,7 +445,7 @@ extends 
 
 next
-publicbooleannext()
+publicbooleannext()
  throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true;
 title="class or interface in java.io">IOException
 Description copied from 
class:HFileReaderImpl.HFileScannerImpl
 Go to the next key/value in the block section. Loads the 
next block if
@@ -469,7 +469,7 @@ extends 
 
 getKey
-publicCellgetKey()
+publicCellgetKey()
 Description copied from 
interface:HFileScanner
 Gets the current key in the form of a cell. You must call
  HFileScanner.seekTo(Cell)
 before this method.
@@ -489,7 +489,7 @@ extends 
 
 getValue
-publichttp://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffergetValue()
+publichttp://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html?is-external=true;
 title="class or interface in java.nio">ByteBuffergetValue()
 Description copied from 
interface:HFileScanner
 Gets a buffer view to the current value.  You must call
  HFileScanner.seekTo(Cell)
 before this method.
@@ -510,7 +510,7 @@ extends 
 
 getCell
-publicCellgetCell()
+publicCellgetCell()
 
 Specified by:
 getCellin
 interfaceHFileScanner
@@ -527,7 +527,7 @@ extends 
 
 getKeyString
-publichttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringgetKeyString()
+publichttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringgetKeyString()
 Description copied from 
interface:HFileScanner
 Convenience method to get a copy of the key as a string - 
interpreting the
  bytes as UTF8. You must call HFileScanner.seekTo(Cell)
 before this method.
@@ -547,7 +547,7 @@ extends 
 
 getValueString
-publichttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringgetValueString()
+publichttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">StringgetValueString()
 Description copied from 
interface:HFileScanner
 Convenience method to get a copy of the value as a string - 
interpreting
  the bytes as UTF8. You must 

[25/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
index 485a55f..2213b1b 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/coprocessor/class-use/ObserverContext.html
@@ -209,9 +209,7 @@
 
 
 void
-MasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
-Called after a abortProcedure request has been 
processed.
-
+BaseMasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
 
 
 void
@@ -219,17 +217,19 @@
 
 
 void
-BaseMasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
+MasterObserver.postAbortProcedure(ObserverContextMasterCoprocessorEnvironmentctx)
+Called after a abortProcedure request has been 
processed.
+
 
 
 void
-MasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
+BaseMasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
  TableNametableName,
  HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645).
- Use MasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
+ Use BaseMasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
@@ -243,23 +243,21 @@
 
 
 void
-BaseMasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
+MasterObserver.postAddColumn(ObserverContextMasterCoprocessorEnvironmentctx,
  TableNametableName,
  HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645).
- Use BaseMasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
+ Use MasterObserver.postAddColumnFamily(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
 
 void
-MasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
+BaseMasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
TableNametableName,
-   HColumnDescriptorcolumnFamily)
-Called after the new column family has been created.
-
+   HColumnDescriptorcolumnFamily)
 
 
 void
@@ -269,19 +267,21 @@
 
 
 void
-BaseMasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
+MasterObserver.postAddColumnFamily(ObserverContextMasterCoprocessorEnvironmentctx,
TableNametableName,
-   HColumnDescriptorcolumnFamily)
+   HColumnDescriptorcolumnFamily)
+Called after the new column family has been created.
+
 
 
 void
-MasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
+BaseMasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
 TableNametableName,
 HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645). Use
- MasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
+ BaseMasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
@@ -295,22 +295,20 @@
 
 
 void
-BaseMasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
+MasterObserver.postAddColumnHandler(ObserverContextMasterCoprocessorEnvironmentctx,
 TableNametableName,
 HColumnDescriptorcolumnFamily)
 Deprecated.
 As of release 2.0.0, this 
will be removed in HBase 3.0.0
  (https://issues.apache.org/jira/browse/HBASE-13645;>HBASE-13645). Use
- BaseMasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
+ MasterObserver.postCompletedAddColumnFamilyAction(ObserverContext,
 TableName, HColumnDescriptor).
 
 
 
 
 void
-MasterObserver.postAddRSGroup(ObserverContextMasterCoprocessorEnvironmentctx,
-  http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
-Called after a new region server group is added
-
+BaseMasterObserver.postAddRSGroup(ObserverContextMasterCoprocessorEnvironmentctx,
+  http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true;
 title="class or interface in java.lang">Stringname)
 
 

[08/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.CompletedProcedureCleaner.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.CompletedProcedureCleaner.html
 
b/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.CompletedProcedureCleaner.html
index f97ac73..429f39d 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.CompletedProcedureCleaner.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.CompletedProcedureCleaner.html
@@ -256,7 +256,7 @@ extends Procedure
-acquireLock,
 addStackIndex,
 beforeReplay,
 childrenCountDown,
 compareTo,
 completionCleanup,
 convert,
 convert, createProcedureInfo,
 doExecute,
 doRollback,
 elapsedTime,
 getException,
 getLastUpdate,
 getNonceKey,
 getOwner, getParentProcId,
 getProcId,
 getProcIdHashCode,
 getResult,
 getRootProcedureId,
 getStackIndexes,
 getStartTime,
 getState,
 getTimeout, getTimeRemaining,
 hasChildren,
 hasException,
 hasOwner,
 hasParent,
 hasTimeout,
 incChildrenLatch,
 isFailed,
 isFinished, isSuccess,
 isSuspended,
 isWaiting,
 isYieldAfterExecutionStep,
 newInstance,
 releaseLock,
 removeStackIndex,
 resume,
 setAbortFailure,
 setChildrenLatch,
 setFailure,
 setFailure,
 setNonceKey,
 setOwner,
 setParentProcId,
 setProcId,
  setResult,
 setStackIndexes,
 setStartTime,
 setState,
 setTimeout,
 setTimeoutFailure,
 shouldWaitClientAck,
 suspend,
 toString,
 toStringClass,
 toStringClassDetails,
 toStringDetails,
 toStringSimpleSB,
 toStringState,
 updateTimestamp,
 validateClass,
 wasExecuted
+acquireLock,
 addStackIndex,
 beforeReplay,
 childrenCountDown,
 compareTo,
 completionCleanup,
 convert,
 convert, createProcedureInfo,
 doExecute,
 doRollback,
 elapsedTime,
 getException,
 getLastUpdate,
 getNonceKey,
 getOwner, getParentProcId,
 getProcId,
 getProcIdHashCode,
 getResult,
 getRootProcedureId,
 getStackIndexes,
 getStartTime,
 getState,
 getTimeout, getTimeRemaining,
 hasChildren,
 hasException,
 hasOwner,
 hasParent,
 hasTimeout,
 incChildrenLatch,
 isFailed,
 isFinished, isRunnable,
 isSuccess,
 isSuspended,
 isWaiting,
 isYieldAfterExecutionStep,
 newInstance,
 releaseLock,
 removeStackIndex,
 resume, setAbortFailure,
 setChildrenLatch,
 setFailure,
 setFailure,
 setNonceKey,
 setOwner,
 setParentProcId,
 setProcId,
 setResult,
 setStackIndexes,
 setStartTime,
 setState,
 setTimeout,
 setTimeoutFailure,
 shouldWaitClientAck,
 suspend,
 toString,
 toStringClass,
 toStringClassDetails,
 toStringDetails,
 toStringSimpleSB,
 toStringState,
 updateTimestamp,
 validateClass, 
wasExecuted
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.html 
b/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.html
index aaa2b78..50d3908 100644
--- a/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.html
+++ b/devapidocs/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.html
@@ -463,7 +463,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 registerListener(ProcedureExecutor.ProcedureExecutorListenerlistener)
 
 
-void
+boolean
 removeChore(ProcedureInMemoryChorechore)
 Remove a chore procedure from the executor
 
@@ -949,11 +949,13 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 removeChore
-publicvoidremoveChore(ProcedureInMemoryChorechore)
+publicbooleanremoveChore(ProcedureInMemoryChorechore)
 Remove a chore procedure from the executor
 
 Parameters:
 chore - the chore to remove
+Returns:
+whether the chore is removed, or it will be removed later
 
 
 
@@ -963,7 +965,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 submitProcedure
-publiclongsubmitProcedure(Procedureproc)
+publiclongsubmitProcedure(Procedureproc)
 Add a new root-procedure to the executor.
 
 Parameters:
@@ -979,7 +981,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 submitProcedure
-publiclongsubmitProcedure(Procedureproc,
+publiclongsubmitProcedure(Procedureproc,
 longnonceGroup,
 longnonce)
 Add a new root-procedure to the executor.
@@ -999,7 +1001,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 getResult
-publicProcedureInfogetResult(longprocId)
+publicProcedureInfogetResult(longprocId)
 
 
 
@@ -1008,7 +1010,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 isFinished
-publicbooleanisFinished(longprocId)
+publicbooleanisFinished(longprocId)
 

[52/52] hbase-site git commit: Empty commit

2016-09-16 Thread misty
Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/e3ab1d1d
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/e3ab1d1d
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/e3ab1d1d

Branch: refs/heads/asf-site
Commit: e3ab1d1d0d55c29ff50c62d06baa43e46b4b5ad0
Parents: fd52f87
Author: Misty Stanley-Jones <mi...@apache.org>
Authored: Fri Sep 16 10:25:11 2016 -0700
Committer: Misty Stanley-Jones <mi...@apache.org>
Committed: Fri Sep 16 10:25:11 2016 -0700

--

--




[50/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html 
b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
index 4f550e9..9ca4097 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html
@@ -1064,15 +1064,15 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 
 
-Increment
-Increment.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
-
-
 Mutation
 Mutation.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
 Method for setting the put's familyMap
 
 
+
+Increment
+Increment.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
+
 
 Delete
 Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true;
 title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListCellmap)
@@ -1092,8 +1092,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 
-Cell
-MultiRowRangeFilter.getNextCellHint(CellcurrentKV)
+abstract Cell
+Filter.getNextCellHint(CellcurrentCell)
+If the filter returns the match code SEEK_NEXT_USING_HINT, 
then it should also tell which is
+ the next key it must seek to.
+
 
 
 Cell
@@ -1101,40 +1104,39 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 Cell
-FuzzyRowFilter.getNextCellHint(CellcurrentCell)
+MultipleColumnPrefixFilter.getNextCellHint(Cellcell)
 
 
 Cell
-MultipleColumnPrefixFilter.getNextCellHint(Cellcell)
+FilterList.getNextCellHint(CellcurrentCell)
 
 
 Cell
-TimestampsFilter.getNextCellHint(CellcurrentCell)
-Pick the next cell that the scanner should seek to.
-
+ColumnPaginationFilter.getNextCellHint(Cellcell)
 
 
 Cell
-FilterList.getNextCellHint(CellcurrentCell)
+FuzzyRowFilter.getNextCellHint(CellcurrentCell)
 
 
 Cell
-ColumnPaginationFilter.getNextCellHint(Cellcell)
+ColumnRangeFilter.getNextCellHint(Cellcell)
 
 
 Cell
-ColumnRangeFilter.getNextCellHint(Cellcell)
+TimestampsFilter.getNextCellHint(CellcurrentCell)
+Pick the next cell that the scanner should seek to.
+
 
 
-abstract Cell
-Filter.getNextCellHint(CellcurrentCell)
-If the filter returns the match code SEEK_NEXT_USING_HINT, 
then it should also tell which is
- the next key it must seek to.
-
+Cell
+MultiRowRangeFilter.getNextCellHint(CellcurrentKV)
 
 
-Cell
-SkipFilter.transformCell(Cellv)
+abstract Cell
+Filter.transformCell(Cellv)
+Give the filter a chance to transform the passed 
KeyValue.
+
 
 
 Cell
@@ -1142,17 +1144,15 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 
 Cell
-KeyOnlyFilter.transformCell(Cellcell)
+FilterList.transformCell(Cellc)
 
 
 Cell
-FilterList.transformCell(Cellc)
+KeyOnlyFilter.transformCell(Cellcell)
 
 
-abstract Cell
-Filter.transformCell(Cellv)
-Give the filter a chance to transform the passed 
KeyValue.
-
+Cell
+SkipFilter.transformCell(Cellv)
 
 
 
@@ -1196,78 +1196,78 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 MultipleColumnPrefixFilter.filterColumn(Cellcell)
 
 
-Filter.ReturnCode
-MultiRowRangeFilter.filterKeyValue(Cellignored)
+abstract Filter.ReturnCode
+Filter.filterKeyValue(Cellv)
+A way to filter based on the column family, column 
qualifier and/or the column value.
+
 
 
 Filter.ReturnCode
-DependentColumnFilter.filterKeyValue(Cellc)
+ColumnPrefixFilter.filterKeyValue(Cellcell)
 
 
 Filter.ReturnCode
-RandomRowFilter.filterKeyValue(Cellv)
+WhileMatchFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-ColumnPrefixFilter.filterKeyValue(Cellcell)
+PrefixFilter.filterKeyValue(Cellv)
 
 
 Filter.ReturnCode
-SkipFilter.filterKeyValue(Cellv)

[47/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
index b09a5a1..2f28411 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/Order.html
@@ -116,11 +116,11 @@
 
 
 protected Order
-RawBytes.order
+OrderedBytesBase.order
 
 
 protected Order
-OrderedBytesBase.order
+RawBytes.order
 
 
 
@@ -133,26 +133,23 @@
 
 
 Order
-DataType.getOrder()
-Retrieve the sort Order imposed by this data type, 
or null when
- natural ordering is not preserved.
-
+RawString.getOrder()
 
 
 Order
-Union3.getOrder()
+RawDouble.getOrder()
 
 
 Order
-RawLong.getOrder()
+RawInteger.getOrder()
 
 
 Order
-RawShort.getOrder()
+RawFloat.getOrder()
 
 
 Order
-Struct.getOrder()
+FixedLengthWrapper.getOrder()
 
 
 Order
@@ -160,47 +157,50 @@
 
 
 Order
-FixedLengthWrapper.getOrder()
+PBType.getOrder()
 
 
 Order
-RawByte.getOrder()
+OrderedBytesBase.getOrder()
 
 
 Order
-RawString.getOrder()
+TerminatedWrapper.getOrder()
 
 
 Order
-Union4.getOrder()
+RawBytes.getOrder()
 
 
 Order
-RawBytes.getOrder()
+RawLong.getOrder()
 
 
 Order
-TerminatedWrapper.getOrder()
+RawShort.getOrder()
 
 
 Order
-OrderedBytesBase.getOrder()
+RawByte.getOrder()
 
 
 Order
-RawInteger.getOrder()
+DataType.getOrder()
+Retrieve the sort Order imposed by this data type, 
or null when
+ natural ordering is not preserved.
+
 
 
 Order
-RawDouble.getOrder()
+Struct.getOrder()
 
 
 Order
-RawFloat.getOrder()
+Union3.getOrder()
 
 
 Order
-PBType.getOrder()
+Union4.getOrder()
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/apidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/util/class-use/Pair.html 
b/apidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
index acd3f29..cdc3915 100644
--- a/apidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
+++ b/apidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
@@ -177,11 +177,11 @@ Input/OutputFormats, a table indexing MapReduce job, and 
utility methods.
 
 
 protected Pairbyte[][],byte[][]
-TableInputFormatBase.getStartEndKeys()
+TableInputFormat.getStartEndKeys()
 
 
 protected Pairbyte[][],byte[][]
-TableInputFormat.getStartEndKeys()
+TableInputFormatBase.getStartEndKeys()
 
 
 



[15/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.PlanType.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.PlanType.html
 
b/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.PlanType.html
index f27058e..b4abfc5 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.PlanType.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.PlanType.html
@@ -104,19 +104,19 @@
 
 
 NormalizationPlan.PlanType
-MergeNormalizationPlan.getType()
+EmptyNormalizationPlan.getType()
 
 
 NormalizationPlan.PlanType
-SplitNormalizationPlan.getType()
+MergeNormalizationPlan.getType()
 
 
 NormalizationPlan.PlanType
-EmptyNormalizationPlan.getType()
+NormalizationPlan.getType()
 
 
 NormalizationPlan.PlanType
-NormalizationPlan.getType()
+SplitNormalizationPlan.getType()
 
 
 static NormalizationPlan.PlanType
@@ -142,25 +142,25 @@ the order they are declared.
 
 
 long
-RegionNormalizer.getSkippedCount(NormalizationPlan.PlanTypetype)
+SimpleRegionNormalizer.getSkippedCount(NormalizationPlan.PlanTypetype)
 
 
 long
-SimpleRegionNormalizer.getSkippedCount(NormalizationPlan.PlanTypetype)
+RegionNormalizer.getSkippedCount(NormalizationPlan.PlanTypetype)
 
 
 void
+SimpleRegionNormalizer.planSkipped(HRegionInfohri,
+   NormalizationPlan.PlanTypetype)
+
+
+void
 RegionNormalizer.planSkipped(HRegionInfohri,
NormalizationPlan.PlanTypetype)
 Notification for the case where plan couldn't be executed 
due to constraint violation, such as
  namespace quota
 
 
-
-void
-SimpleRegionNormalizer.planSkipped(HRegionInfohri,
-   NormalizationPlan.PlanTypetype)
-
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.html
 
b/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.html
index 29587af..d635250 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/NormalizationPlan.html
@@ -145,14 +145,14 @@
 
 
 http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true;
 title="class or interface in java.util">ListNormalizationPlan
-RegionNormalizer.computePlanForTable(TableName table)
-Computes next optimal normalization plan.
+SimpleRegionNormalizer.computePlanForTable(TableName table)
+Computes next most "urgent" normalization action on the table.
 
 
 
 List<NormalizationPlan>
-SimpleRegionNormalizer.computePlanForTable(TableName table)
-Computes next most "urgent" normalization action on the table.
+RegionNormalizer.computePlanForTable(TableName table)
+Computes next optimal normalization plan.
 
 
 

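The reordered rows above only reshuffle generated Javadoc tables, but together they document the normalizer contract: `computePlanForTable(TableName)` returns a `List<NormalizationPlan>`, while `planSkipped(...)` and `getSkippedCount(PlanType)` track plans that could not run (for example, due to a namespace quota violation). A minimal, self-contained sketch of that contract, using plain-Java stand-ins (the `PlanType` enum values and `SimpleNormalizer` class below are illustrative assumptions, not the real HBase types):

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class NormalizerSketch {
    // Stand-in for NormalizationPlan.PlanType in the generated docs above.
    enum PlanType { SPLIT, MERGE, NONE }

    // Minimal analogue of the RegionNormalizer contract shown in the diff:
    // computePlanForTable returns the next plans to execute; planSkipped
    // records a plan that could not run; getSkippedCount reports how many
    // plans of a given type were skipped.
    static class SimpleNormalizer {
        private final Map<PlanType, Long> skipped = new EnumMap<>(PlanType.class);

        List<PlanType> computePlanForTable(String table) {
            // Real logic would inspect region sizes; here we return a fixed plan.
            List<PlanType> plans = new ArrayList<>();
            plans.add(PlanType.SPLIT);
            return plans;
        }

        void planSkipped(String regionInfo, PlanType type) {
            skipped.merge(type, 1L, Long::sum);
        }

        long getSkippedCount(PlanType type) {
            return skipped.getOrDefault(type, 0L);
        }
    }

    public static void main(String[] args) {
        SimpleNormalizer n = new SimpleNormalizer();
        n.planSkipped("region-a", PlanType.SPLIT);
        n.planSkipped("region-b", PlanType.SPLIT);
        System.out.println(n.computePlanForTable("t1"));       // [SPLIT]
        System.out.println(n.getSkippedCount(PlanType.SPLIT)); // 2
    }
}
```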
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/RegionNormalizer.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/RegionNormalizer.html b/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/RegionNormalizer.html
index 06b18cf..2e869cf 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/RegionNormalizer.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/normalizer/class-use/RegionNormalizer.html
@@ -121,11 +121,11 @@
 
 
 RegionNormalizer
-MasterServices.getRegionNormalizer()
+HMaster.getRegionNormalizer()
 
 
 RegionNormalizer
-HMaster.getRegionNormalizer()
+MasterServices.getRegionNormalizer()
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/master/package-tree.html b/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
index ac9db8d..f0e0121 100644
--- a/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/master/package-tree.html
@@ -331,11 +331,11 @@
 
 java.lang.Enum<E> (implements java.lang.Comparable<T>, 

[18/52] [partial] hbase-site git commit: Published site at 2597217ae5aa057e1931c772139ce8cc7a2b3efb.

2016-09-16 Thread misty
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.FSReader.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.FSReader.html b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.FSReader.html
index fdd48be..e28a95e 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.FSReader.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.FSReader.html
@@ -137,14 +137,14 @@
 
 
 HFileBlock.FSReader
-HFile.Reader.getUncachedBlockReader()
-
-
-HFileBlock.FSReader
 HFileReaderImpl.getUncachedBlockReader()
 For testing
 
 
+
+HFileBlock.FSReader
+HFile.Reader.getUncachedBlockReader()
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.html b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.html
index 894aa07..2c2f2b8 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlock.html
@@ -174,12 +174,12 @@
 
 
 HFileBlock
-HFile.Reader.getMetaBlock(String metaBlockName,
+HFileReaderImpl.getMetaBlock(String metaBlockName,
 boolean cacheBlock)
 
 
 HFileBlock
-HFileReaderImpl.getMetaBlock(String metaBlockName,
+HFile.Reader.getMetaBlock(String metaBlockName,
 boolean cacheBlock)
 
 
@@ -197,27 +197,27 @@
 
 
 HFileBlock
-HFile.CachingBlockReader.readBlock(long offset,
+HFileReaderImpl.readBlock(long dataBlockOffset,
  long onDiskBlockSize,
  boolean cacheBlock,
  boolean pread,
  boolean isCompaction,
  boolean updateCacheMetrics,
  BlockType expectedBlockType,
- DataBlockEncoding expectedDataBlockEncoding)
-Read in a file block.
-
+ DataBlockEncoding expectedDataBlockEncoding)
 
 
 HFileBlock
-HFileReaderImpl.readBlock(long dataBlockOffset,
+HFile.CachingBlockReader.readBlock(long offset,
  long onDiskBlockSize,
  boolean cacheBlock,
  boolean pread,
  boolean isCompaction,
  boolean updateCacheMetrics,
  BlockType expectedBlockType,
- DataBlockEncoding expectedDataBlockEncoding)
+ DataBlockEncoding expectedDataBlockEncoding)
+Read in a file block.
+
 
 
 HFileBlock
@@ -358,13 +358,13 @@
 
 
 void
-HFile.CachingBlockReader.returnBlock(HFileBlock block)
-Return the given block back to the cache, if it was obtained from cache.
-
+HFileReaderImpl.returnBlock(HFileBlock block)
 
 
 void
-HFileReaderImpl.returnBlock(HFileBlock block)
+HFile.CachingBlockReader.returnBlock(HFileBlock block)
+Return the given block back to the cache, if it was obtained from cache.
+
 
 
 private void

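The `readBlock`/`returnBlock` rows in this hunk describe a checkout/release pattern: `readBlock(...)` hands a block to the caller (from cache when possible), and `returnBlock(...)` gives it back to the cache, but only if it was obtained from the cache. A small, self-contained sketch of that pattern with simple reference counting (the `Block` and `Reader` classes below are illustrative stand-ins, not the real `HFileBlock` or `HFile.CachingBlockReader` types):

```java
import java.util.HashMap;
import java.util.Map;

public class BlockCacheSketch {
    static class Block {
        final long offset;
        int refCount;          // > 0 while checked out from the cache
        final boolean cached;  // whether this block lives in the cache
        Block(long offset, boolean cached) { this.offset = offset; this.cached = cached; }
    }

    static class Reader {
        private final Map<Long, Block> cache = new HashMap<>();

        // Analogue of readBlock: serve from cache when possible, otherwise
        // "read" a fresh block and optionally cache it for later callers.
        Block readBlock(long offset, boolean cacheBlock) {
            Block b = cache.get(offset);
            if (b == null) {
                b = new Block(offset, cacheBlock);
                if (cacheBlock) cache.put(offset, b);
            }
            if (b.cached) b.refCount++;
            return b;
        }

        // Analogue of returnBlock: release the block back to the cache,
        // but only if it was obtained from the cache in the first place.
        void returnBlock(Block b) {
            if (b.cached) b.refCount--;
        }
    }

    public static void main(String[] args) {
        Reader r = new Reader();
        Block b1 = r.readBlock(0L, true);  // cached, refCount == 1
        Block b2 = r.readBlock(0L, true);  // same block, refCount == 2
        r.returnBlock(b1);
        r.returnBlock(b2);
        System.out.println(b1 == b2);      // true
        System.out.println(b1.refCount);   // 0
    }
}
```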
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlockIndex.BlockIndexReader.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlockIndex.BlockIndexReader.html b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlockIndex.BlockIndexReader.html
index 67a2658..20688e8 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlockIndex.BlockIndexReader.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileBlockIndex.BlockIndexReader.html
@@ -144,11 +144,11 @@
 
 
 HFileBlockIndex.BlockIndexReader
-HFile.Reader.getDataBlockIndexReader()
+HFileReaderImpl.getDataBlockIndexReader()
 
 
 HFileBlockIndex.BlockIndexReader
-HFileReaderImpl.getDataBlockIndexReader()
+HFile.Reader.getDataBlockIndexReader()
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fd52f877/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
index af68e84..f7a5ebf 100644
--- a/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
+++ b/devapidocs/org/apache/hadoop/hbase/io/hfile/class-use/HFileContext.html
@@ 
