Repository: hbase
Updated Branches:
  refs/heads/master 125f3eace -> 8e0571a3a


http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/performance.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/performance.adoc 
b/src/main/asciidoc/_chapters/performance.adoc
index f1d89b5..c917646 100644
--- a/src/main/asciidoc/_chapters/performance.adoc
+++ b/src/main/asciidoc/_chapters/performance.adoc
@@ -320,7 +320,7 @@ See also <<perf.compression.however>> for compression 
caveats.
 [[schema.regionsize]]
 === Table RegionSize
 
-The regionsize can be set on a per-table basis via `setFileSize` on 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor]
 in the event where certain tables require different regionsizes than the 
configured default regionsize.
+The regionsize can be set on a per-table basis via `setFileSize` on 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor]
 in the event where certain tables require different regionsizes than the 
configured default regionsize.
 
 See <<ops.capacity.regions>> for more information.
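
For illustration, a per-table region size might be set along these lines from the
Java client; the table name, column family, and 20 GiB threshold are placeholders,
and the `HTableDescriptor` setter used below is `setMaxFileSize(long)` (in bytes).

[source,java]
----
// Sketch only: assumes an open Connection `connection`.
Admin admin = connection.getAdmin();
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable"));
desc.addFamily(new HColumnDescriptor("cf"));
desc.setMaxFileSize(20L * 1024 * 1024 * 1024); // per-table region split threshold, in bytes
admin.createTable(desc);
admin.close();
----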
 
@@ -372,7 +372,7 @@ Bloom filters are enabled on a Column Family.
 You can do this by using the setBloomFilterType method of HColumnDescriptor or 
using the HBase API.
 Valid values are `NONE`, `ROW` (default), or `ROWCOL`.
 See <<bloom.filters.when>> for more information on `ROW` versus `ROWCOL`.
-See also the API documentation for 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
+See also the API documentation for 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
 
 The following example creates a table and enables a ROWCOL Bloom filter on the 
`colfam1` column family.
 
@@ -431,7 +431,7 @@ The blocksize can be configured for each ColumnFamily in a 
table, and defaults t
 Larger cell values require larger blocksizes.
 There is an inverse relationship between blocksize and the resulting StoreFile 
indexes (i.e., if the blocksize is doubled then the resulting indexes should be 
roughly halved).
 
-See 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor]
 and <<store>>for more information.
+See 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor]
 and <<store>> for more information.
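
As a rough sketch of setting the block size on an existing column family (names and
the 128 KB value are placeholders; `Admin#modifyColumn` is assumed as in the 1.x
client API, and note that it replaces the family's descriptor wholesale):

[source,java]
----
// Sketch only: assumes an open Connection `connection` and an existing table "mytable".
Admin admin = connection.getAdmin();
HColumnDescriptor cf = new HColumnDescriptor("cf");
cf.setBlocksize(128 * 1024); // larger blocks for larger cells; the default is 64 KB
admin.modifyColumn(TableName.valueOf("mytable"), cf);
admin.close();
----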
 
 [[cf.in.memory]]
 === In-Memory ColumnFamilies
@@ -440,7 +440,7 @@ ColumnFamilies can optionally be defined as in-memory.
 Data is still persisted to disk, just like any other ColumnFamily.
 In-memory blocks have the highest priority in the <<block.cache>>, but it is 
not a guarantee that the entire table will be in memory.
 
-See 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor]
 for more information.
+See 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor]
 for more information.
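
A minimal sketch of flagging a family as in-memory at table-creation time (names are
placeholders; `admin` is an `Admin` handle as in the earlier sketches):

[source,java]
----
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("hot_table"));
HColumnDescriptor cf = new HColumnDescriptor("cf");
cf.setInMemory(true); // this family's blocks get in-memory priority in the block cache
desc.addFamily(cf);
admin.createTable(desc);
----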
 
 [[perf.compression]]
 === Compression
@@ -549,7 +549,7 @@ If deferred log flush is used, WAL edits are kept in memory 
until the flush peri
 The benefit is aggregated and asynchronous `WAL` writes, but the potential 
downside is that if the RegionServer goes down, the yet-to-be-flushed edits are 
lost.
 This is safer, however, than not using WAL at all with Puts.
 
-Deferred log flush can be configured on tables via 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor].
+Deferred log flush can be configured on tables via 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor].
 The default value of `hbase.regionserver.optionallogflushinterval` is 1000ms.
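
In more recent client APIs the same behaviour is usually expressed through the table's
durability setting; a hedged sketch (names are placeholders, and older releases exposed
`HTableDescriptor#setDeferredLogFlush(true)` instead):

[source,java]
----
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable"));
desc.addFamily(new HColumnDescriptor("cf"));
desc.setDurability(Durability.ASYNC_WAL); // WAL edits are batched and flushed asynchronously
admin.createTable(desc);
----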
 
 [[perf.hbase.client.putwal]]
@@ -574,7 +574,7 @@ There is a utility `HTableUtil` currently on MASTER that 
does this, but you can
 [[perf.hbase.write.mr.reducer]]
 === MapReduce: Skip The Reducer
 
-When writing a lot of data to an HBase table from a MR job (e.g., with 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]),
 and specifically where Puts are being emitted from the Mapper, skip the 
Reducer step.
+When writing a lot of data to an HBase table from a MR job (e.g., with 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]),
 and specifically where Puts are being emitted from the Mapper, skip the 
Reducer step.
 When a Reducer step is used, all of the output (Puts) from the Mapper will get 
spooled to disk, then sorted/shuffled to other Reducers that will most likely 
be off-node.
 It's far more efficient to just write directly to HBase.
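
A map-only job wired up with `TableMapReduceUtil` might look roughly like the sketch
below; `MyMapper` (a mapper that emits `Put`s), the `conf` object, and the table name
are placeholders:

[source,java]
----
Job job = Job.getInstance(conf, "write-to-hbase");
job.setJarByClass(MyMapper.class);
job.setMapperClass(MyMapper.class);
// A null reducer class makes TableOutputFormat write the mapper's Puts directly.
TableMapReduceUtil.initTableReducerJob("target_table", null, job);
job.setNumReduceTasks(0); // no shuffle, no sort, no spill to other nodes
job.waitForCompletion(true);
----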
 
@@ -600,7 +600,7 @@ For example, here is a good general thread on what to look 
at addressing read-ti
 [[perf.hbase.client.caching]]
 === Scan Caching
 
-If HBase is used as an input source for a MapReduce job, for example, make 
sure that the input 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 instance to the MapReduce job has `setCaching` set to something greater than 
the default (which is 1). Using the default value means that the map-task will 
make call back to the region-server for every record processed.
+If HBase is used as an input source for a MapReduce job, for example, make 
sure that the input 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 instance to the MapReduce job has `setCaching` set to something greater than 
the default (which is 1). Using the default value means that the map-task will 
make a call back to the region-server for every record processed.
 Setting this value to 500, for example, will transfer 500 rows at a time to 
the client to be processed.
 There is a cost/benefit to have the cache value be large because it costs more 
in memory for both client and RegionServer, so bigger isn't always better.
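
A short sketch of the Scan configuration described above (the value 500 is only an
example, and the mapper wiring is left out):

[source,java]
----
Scan scan = new Scan();
scan.setCaching(500);       // ship 500 rows per RPC instead of the default 1
scan.setCacheBlocks(false); // recommended for MapReduce input scans; see Block Cache below
// pass `scan` to TableMapReduceUtil.initTableMapperJob(...) when wiring up the job
----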
 
@@ -649,7 +649,7 @@ For MapReduce jobs that use HBase tables as a source, if 
there a pattern where t
 === Close ResultScanners
 
 This isn't so much about improving performance but rather _avoiding_ 
performance problems.
-If you forget to close 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ResultScanner.html[ResultScanners]
 you can cause problems on the RegionServers.
+If you forget to close 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ResultScanner.html[ResultScanners]
 you can cause problems on the RegionServers.
 Always have ResultScanner processing enclosed in try/catch blocks.
 
 [source,java]
@@ -669,7 +669,7 @@ table.close();
 [[perf.hbase.client.blockcache]]
 === Block Cache
 
-link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 instances can be set to use the block cache in the RegionServer via the 
`setCacheBlocks` method.
+link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 instances can be set to use the block cache in the RegionServer via the 
`setCacheBlocks` method.
 For input Scans to MapReduce jobs, this should be `false`.
 For frequently accessed rows, it is advisable to use the block cache.
 
@@ -679,8 +679,8 @@ See <<offheap.blockcache>>
 [[perf.hbase.client.rowkeyonly]]
 === Optimal Loading of Row Keys
 
-When performing a table 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[scan]
 where only the row keys are needed (no families, qualifiers, values or 
timestamps), add a FilterList with a `MUST_PASS_ALL` operator to the scanner 
using `setFilter`.
-The filter list should include both a 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html[FirstKeyOnlyFilter]
 and a 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html[KeyOnlyFilter].
+When performing a table 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[scan]
 where only the row keys are needed (no families, qualifiers, values or 
timestamps), add a FilterList with a `MUST_PASS_ALL` operator to the scanner 
using `setFilter`.
+The filter list should include both a 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html[FirstKeyOnlyFilter]
 and a 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html[KeyOnlyFilter].
 Using this filter combination will result in a worst case scenario of a 
RegionServer reading a single value from disk and minimal network traffic to 
the client for a single row.
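
A minimal sketch of that filter combination:

[source,java]
----
Scan scan = new Scan();
FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filters.addFilter(new FirstKeyOnlyFilter()); // only the first cell of each row
filters.addFilter(new KeyOnlyFilter());      // drop the value, keep only the key
scan.setFilter(filters);
----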
 
 [[perf.hbase.read.dist]]
@@ -816,7 +816,7 @@ In this case, special care must be taken to regularly 
perform major compactions
 As is documented in <<datamodel>>, marking rows as deleted creates additional 
StoreFiles which then need to be processed on reads.
 Tombstones only get cleaned up with major compactions.
 
-See also <<compaction>> and 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-[Admin.majorCompact].
+See also <<compaction>> and 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-[Admin.majorCompact].
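
For illustration, requesting a major compaction from the Java client (the table name
is a placeholder; `admin` is an open `Admin` handle):

[source,java]
----
admin.majorCompact(TableName.valueOf("mytable")); // request is asynchronous
----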
 
 [[perf.deleting.rpc]]
 === Delete RPC Behavior
@@ -825,7 +825,7 @@ Be aware that `Table.delete(Delete)` doesn't use the 
writeBuffer.
 It will execute a RegionServer RPC with each invocation.
 For a large number of deletes, consider `Table.delete(List)`.
 
-See 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete-org.apache.hadoop.hbase.client.Delete-[hbase.client.Delete]
+See 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete-org.apache.hadoop.hbase.client.Delete-[hbase.client.Delete]
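
A sketch of the batched form; `table` is an open `Table` and `rowsToRemove` is a
placeholder collection of row keys:

[source,java]
----
List<Delete> deletes = new ArrayList<>();
for (byte[] row : rowsToRemove) {
  deletes.add(new Delete(row));
}
table.delete(deletes); // batched RPCs instead of one RPC per Delete
----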
 
 [[perf.hdfs]]
 == HDFS

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/preface.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/preface.adoc 
b/src/main/asciidoc/_chapters/preface.adoc
index ed2ca7a..280f2d8 100644
--- a/src/main/asciidoc/_chapters/preface.adoc
+++ b/src/main/asciidoc/_chapters/preface.adoc
@@ -27,11 +27,11 @@
 :icons: font
 :experimental:
 
-This is the official reference guide for the 
link:http://hbase.apache.org/[HBase] version it ships with.
+This is the official reference guide for the 
link:https://hbase.apache.org/[HBase] version it ships with.
 
 Herein you will find either the definitive documentation on an HBase topic as 
of its
 standing when the referenced HBase version shipped, or it will point to the 
location
-in link:http://hbase.apache.org/apidocs/index.html[Javadoc] or
+in link:https://hbase.apache.org/apidocs/index.html[Javadoc] or
 link:https://issues.apache.org/jira/browse/HBASE[JIRA] where the pertinent 
information can be found.
 
 .About This Guide

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/rpc.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/rpc.adoc 
b/src/main/asciidoc/_chapters/rpc.adoc
index 1d363eb..fbfba6c 100644
--- a/src/main/asciidoc/_chapters/rpc.adoc
+++ b/src/main/asciidoc/_chapters/rpc.adoc
@@ -28,7 +28,7 @@
 :icons: font
 :experimental:
 
-In 0.95, all client/server communication is done with 
link:https://developers.google.com/protocol-buffers/[protobuf'ed] Messages 
rather than with 
link:http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html[Hadoop
+In 0.95, all client/server communication is done with 
link:https://developers.google.com/protocol-buffers/[protobuf'ed] Messages 
rather than with 
link:https://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html[Hadoop
             Writables].
 Our RPC wire format therefore changes.
 This document describes the client/server request/response protocol and our 
new RPC wire-format.

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc 
b/src/main/asciidoc/_chapters/schema_design.adoc
index 92064ae..4cd7656 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -47,7 +47,7 @@ See also Robert Yokota's 
link:https://blogs.apache.org/hbase/entry/hbase-applica
 [[schema.creation]]
 ==  Schema Creation
 
-HBase schemas can be created or updated using the <<shell>> or by using 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html[Admin]
 in the Java API.
+HBase schemas can be created or updated using the <<shell>> or by using 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html[Admin]
 in the Java API.
 
 Tables must be disabled when making ColumnFamily modifications, for example:
 
@@ -223,7 +223,7 @@ You could also optimize things so that certain pairs of 
keys were always in the
 A third common trick for preventing hotspotting is to reverse a fixed-width or 
numeric row key so that the part that changes the most often (the least 
significant digit) is first.
 This effectively randomizes row keys, but sacrifices row ordering properties.
 
-See 
https://communities.intel.com/community/itpeernetwork/datastack/blog/2013/11/10/discussion-on-designing-hbase-tables,
 and link:http://phoenix.apache.org/salted.html[article on Salted Tables] from 
the Phoenix project, and the discussion in the comments of 
link:https://issues.apache.org/jira/browse/HBASE-11682[HBASE-11682] for more 
information about avoiding hotspotting.
+See 
https://communities.intel.com/community/itpeernetwork/datastack/blog/2013/11/10/discussion-on-designing-hbase-tables,
 and link:https://phoenix.apache.org/salted.html[article on Salted Tables] from 
the Phoenix project, and the discussion in the comments of 
link:https://issues.apache.org/jira/browse/HBASE-11682[HBASE-11682] for more 
information about avoiding hotspotting.
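
As one hedged illustration of the salting approach (the bucket count and key are
placeholders, not a recommendation):

[source,java]
----
// One-byte salt prefix derived from the original key: writes spread over
// `buckets` ranges, at the cost of issuing `buckets` scans to read a range back.
int buckets = 16;
byte[] key = Bytes.toBytes("original-row-key");
byte salt = (byte) ((Bytes.hashCode(key) & 0x7fffffff) % buckets);
byte[] saltedKey = new byte[key.length + 1];
saltedKey[0] = salt;
System.arraycopy(key, 0, saltedKey, 1, key.length);
----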
 
 [[timeseries]]
 ===  Monotonically Increasing Row Keys/Timeseries Data
@@ -433,7 +433,7 @@ public static byte[][] getHexSplits(String startKey, String 
endKey, int numRegio
 [[schema.versions.max]]
 === Maximum Number of Versions
 
-The maximum number of row versions to store is configured per column family 
via 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
+The maximum number of row versions to store is configured per column family 
via 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
 The default for max versions is 1.
 This is an important parameter because, as described in the <<datamodel>> section, 
HBase does _not_ overwrite row values, but rather stores different values per 
row by time (and qualifier). Excess versions are removed during major 
compactions.
 The number of max versions may need to be increased or decreased depending on 
application needs.
@@ -443,14 +443,14 @@ It is not recommended setting the number of max versions 
to an exceedingly high
 [[schema.minversions]]
 ===  Minimum Number of Versions
 
-Like maximum number of row versions, the minimum number of row versions to 
keep is configured per column family via 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
+Like maximum number of row versions, the minimum number of row versions to 
keep is configured per column family via 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor].
 The default for min versions is 0, which means the feature is disabled.
 The minimum number of row versions parameter is used together with the 
time-to-live parameter and can be combined with the number of row versions 
parameter to allow configurations such as "keep the last T minutes worth of 
data, at most N versions, _but keep at least M versions around_" (where M is 
the value for minimum number of row versions, M<N). This parameter should only 
be set when time-to-live is enabled for a column family and must be less than 
the number of row versions.
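
A combined sketch of the version and time-to-live settings discussed in this and the
surrounding sections (the family name and values are placeholders; the TTL setter is
covered further below):

[source,java]
----
HColumnDescriptor cf = new HColumnDescriptor("cf");
cf.setMaxVersions(10);           // keep at most 10 versions per cell
cf.setMinVersions(2);            // ...but never let TTL expiry drop below 2 versions
cf.setTimeToLive(24 * 60 * 60);  // ColumnFamily TTL, in seconds (one day)
----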
 
 [[supported.datatypes]]
 ==  Supported Datatypes
 
-HBase supports a "bytes-in/bytes-out" interface via 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html[Put]
 and 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Result.html[Result],
 so anything that can be converted to an array of bytes can be stored as a 
value.
+HBase supports a "bytes-in/bytes-out" interface via 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html[Put]
 and 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Result.html[Result],
 so anything that can be converted to an array of bytes can be stored as a 
value.
 Input could be strings, numbers, complex objects, or even images as long as 
they can be rendered as bytes.
 
 There are practical limits to the size of values (e.g., storing 10-50MB 
objects in HBase would probably be too much to ask); search the mailing list 
for conversations on this topic.
@@ -459,7 +459,7 @@ Take that into consideration when making your design, as 
well as block size for
 
 === Counters
 
-One supported datatype that deserves special mention are "counters" (i.e., the 
ability to do atomic increments of numbers). See 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#increment%28org.apache.hadoop.hbase.client.Increment%29[Increment]
 in `Table`.
+One supported datatype that deserves special mention is "counters" (i.e., the 
ability to do atomic increments of numbers). See 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#increment%28org.apache.hadoop.hbase.client.Increment%29[Increment]
 in `Table`.
 
 Synchronization on counters is done on the RegionServer, not in the client.
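
A sketch of a counter update via `Increment` (row, family, and qualifier names are
placeholders; `table` is an open `Table`):

[source,java]
----
Increment inc = new Increment(Bytes.toBytes("row1"));
inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);
Result result = table.increment(inc); // applied atomically on the RegionServer
long newValue = Bytes.toLong(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("hits")));
----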
 
@@ -479,7 +479,7 @@ Store files which contains only expired rows are deleted on 
minor compaction.
 Setting `hbase.store.delete.expired.storefile` to `false` disables this 
feature.
 Setting minimum number of versions to other than 0 also disables this.
 
-See 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor]
 for more information.
+See 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html[HColumnDescriptor]
 for more information.
 
 Recent versions of HBase also support setting time to live on a per cell basis.
 See link:https://issues.apache.org/jira/browse/HBASE-10560[HBASE-10560] for 
more information.
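
Assuming the per-cell TTL setter added by that change (`Mutation#setTTL`, in
milliseconds, unlike the ColumnFamily TTL which is in seconds), a sketch:

[source,java]
----
Put p = new Put(Bytes.toBytes("row1"));
p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
p.setTTL(10 * 60 * 1000L); // this cell expires ten minutes after it is written
table.put(p);              // `table` is an open Table
----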
@@ -494,7 +494,7 @@ There are two notable differences between cell TTL handling 
and ColumnFamily TTL
 ==  Keeping Deleted Cells
 
 By default, delete markers extend back to the beginning of time.
-Therefore, 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html[Get]
 or 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 operations will not see a deleted cell (row or column), even when the Get or 
Scan operation indicates a time range before the delete marker was placed.
+Therefore, 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html[Get]
 or 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html[Scan]
 operations will not see a deleted cell (row or column), even when the Get or 
Scan operation indicates a time range before the delete marker was placed.
 
 ColumnFamilies can optionally keep deleted cells.
 In this case, deleted cells can still be retrieved, as long as these 
operations specify a time range that ends before the timestamp of any delete 
that would affect the cells.
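
A sketch of enabling this on a family and reading back with a bounded time range
(`deleteTimestamp` is a placeholder; older clients used a boolean
`setKeepDeletedCells(true)` overload):

[source,java]
----
HColumnDescriptor cf = new HColumnDescriptor("cf");
cf.setKeepDeletedCells(KeepDeletedCells.TRUE);

Get get = new Get(Bytes.toBytes("row1"));
get.setTimeRange(0, deleteTimestamp); // end the range before the delete marker's timestamp
----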
@@ -684,7 +684,7 @@ in the table (e.g. make sure values are in the range 1-10). 
Constraints could
 also be used to enforce referential integrity, but this is strongly discouraged
 as it will dramatically decrease the write throughput of the tables where 
integrity
 checking is enabled. Extensive documentation on using Constraints can be found 
at
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/constraint/Constraint.html[Constraint]
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/constraint/Constraint.html[Constraint]
 since version 0.94.
 
 [[schema.casestudies]]
@@ -1095,7 +1095,7 @@ The tl;dr version is that you should probably go with one 
row per user+value, an
 
 Your two options mirror a common question people have when designing HBase 
schemas: should I go "tall" or "wide"? Your first schema is "tall": each row 
represents one value for one user, and so there are many rows in the table for 
each user; the row key is user + valueid, and there would be (presumably) a 
single column qualifier that means "the value". This is great if you want to 
scan over rows in sorted order by row key (thus my question above, about 
whether these ids are sorted correctly). You can start a scan at any 
user+valueid, read the next 30, and be done.
 What you're giving up is the ability to have transactional guarantees around 
all the rows for one user, but it doesn't sound like you need that.
-Doing it this way is generally recommended (see here 
http://hbase.apache.org/book.html#schema.smackdown).
+Doing it this way is generally recommended (see here 
https://hbase.apache.org/book.html#schema.smackdown).
 
 Your second option is "wide": you store a bunch of values in one row, using 
different qualifiers (where the qualifier is the valueid). The simple way to do 
that would be to just store ALL values for one user in a single row.
 I'm guessing you jumped to the "paginated" version because you're assuming 
that storing millions of columns in a single row would be bad for performance, 
which may or may not be true; as long as you're not trying to do too much in a 
single request, or do things like scanning over and returning all of the cells 
in the row, it shouldn't be fundamentally worse.

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc 
b/src/main/asciidoc/_chapters/security.adoc
index 6657b50..cca9364 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -354,7 +354,7 @@ grant 'rest_server', 'RWCA'
 
 For more information about ACLs, please see the 
<<hbase.accesscontrol.configuration>> section
 
-HBase REST gateway supports 
link:http://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO HTTP 
authentication] for client access to the gateway.
+HBase REST gateway supports 
link:https://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO HTTP 
authentication] for client access to the gateway.
 To enable REST gateway Kerberos authentication for client access, add the 
following to the `hbase-site.xml` file for every REST gateway.
 
 [source,xml]
@@ -390,7 +390,7 @@ Substitute the keytab for HTTP for _$KEYTAB_.
 
 HBase REST gateway supports different 'hbase.rest.authentication.type': 
simple, kerberos.
 You can also implement a custom authentication by implementing Hadoop 
AuthenticationHandler, then specify the full class name as 
'hbase.rest.authentication.type' value.
-For more information, refer to 
link:http://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO HTTP 
authentication].
+For more information, refer to 
link:https://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO HTTP 
authentication].
 
 [[security.rest.gateway]]
 === REST Gateway Impersonation Configuration
@@ -1390,11 +1390,11 @@ When you issue a Scan or Get, HBase uses your default 
set of authorizations to
 filter out cells that you do not have access to. A superuser can set the 
default
 set of authorizations for a given user by using the `set_auths` HBase Shell 
command
 or the
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityClient.html#setAuths-org.apache.hadoop.hbase.client.Connection-java.lang.String:A-java.lang.String-[VisibilityClient.setAuths()]
 method.
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityClient.html#setAuths-org.apache.hadoop.hbase.client.Connection-java.lang.String:A-java.lang.String-[VisibilityClient.setAuths()]
 method.
 
 You can specify a different authorization during the Scan or Get, by passing 
the
 AUTHORIZATIONS option in HBase Shell, or the
-link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setAuthorizations-org.apache.hadoop.hbase.security.visibility.Authorizations-[Scan.setAuthorizations()]
+link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setAuthorizations-org.apache.hadoop.hbase.security.visibility.Authorizations-[Scan.setAuthorizations()]
 method if you use the API. This authorization will be combined with your 
default
 set as an additional filter. It will further filter your results, rather than
 giving you additional authorization.
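
A sketch of the per-Scan form (the labels are placeholders):

[source,java]
----
Scan scan = new Scan();
scan.setAuthorizations(new Authorizations("secret", "probationary"));
----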
@@ -1644,7 +1644,7 @@ Rotate the Master Key::
 
 Bulk loading in secure mode is a bit more involved than normal setup, since 
the client has to transfer the ownership of the files generated from the 
MapReduce job to HBase.
 Secure bulk loading is implemented by a coprocessor, named
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html[SecureBulkLoadEndpoint],
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html[SecureBulkLoadEndpoint],
 which uses a staging directory configured by the configuration property 
`hbase.bulkload.staging.dir`, which defaults to
 _/tmp/hbase-staging/_.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/spark.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/spark.adoc 
b/src/main/asciidoc/_chapters/spark.adoc
index 774d137..416457b 100644
--- a/src/main/asciidoc/_chapters/spark.adoc
+++ b/src/main/asciidoc/_chapters/spark.adoc
@@ -27,7 +27,7 @@
 :icons: font
 :experimental:
 
-link:http://spark.apache.org/[Apache Spark] is a software framework that is 
used
+link:https://spark.apache.org/[Apache Spark] is a software framework that is 
used
 to process data in memory in a distributed manner, and is replacing MapReduce 
in
 many use cases.
 
@@ -151,7 +151,7 @@ access to HBase
 For examples of all these functionalities, see the HBase-Spark Module.
 
 == Spark Streaming
-http://spark.apache.org/streaming/[Spark Streaming] is a micro batching stream
+https://spark.apache.org/streaming/[Spark Streaming] is a micro batching stream
 processing framework built on top of Spark. HBase and Spark Streaming make 
great
 companions in that HBase can help serve the following benefits alongside Spark
 Streaming.

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/sql.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/sql.adoc 
b/src/main/asciidoc/_chapters/sql.adoc
index b1ad063..f1c445d 100644
--- a/src/main/asciidoc/_chapters/sql.adoc
+++ b/src/main/asciidoc/_chapters/sql.adoc
@@ -33,10 +33,10 @@ The following projects offer some support for SQL over 
HBase.
 [[phoenix]]
 === Apache Phoenix
 
-link:http://phoenix.apache.org[Apache Phoenix]
+link:https://phoenix.apache.org[Apache Phoenix]
 
 === Trafodion
 
-link:http://trafodion.incubator.apache.org/[Trafodion: Transactional 
SQL-on-HBase]
+link:https://trafodion.incubator.apache.org/[Trafodion: Transactional 
SQL-on-HBase]
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/thrift_filter_language.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/thrift_filter_language.adoc 
b/src/main/asciidoc/_chapters/thrift_filter_language.adoc
index da36cea..1c1279d 100644
--- a/src/main/asciidoc/_chapters/thrift_filter_language.adoc
+++ b/src/main/asciidoc/_chapters/thrift_filter_language.adoc
@@ -28,7 +28,7 @@
 :experimental:
 
 
-Apache link:http://thrift.apache.org/[Thrift] is a cross-platform, 
cross-language development framework.
+Apache link:https://thrift.apache.org/[Thrift] is a cross-platform, 
cross-language development framework.
 HBase includes a Thrift API and filter language.
 The Thrift API relies on client and server processes.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc 
b/src/main/asciidoc/_chapters/tracing.adoc
index 0cddd8a..9db4b7f 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -30,7 +30,7 @@
 :icons: font
 :experimental:
 
-link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added 
support for tracing requests through HBase, using the open source tracing 
library, link:http://htrace.incubator.apache.org/[HTrace].
+link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added 
support for tracing requests through HBase, using the open source tracing 
library, link:https://htrace.incubator.apache.org/[HTrace].
 Setting up tracing is quite simple, however it currently requires some very 
minor changes to your client code (it would not be very difficult to remove 
this requirement).
 
 [[tracing.spanreceivers]]
@@ -67,7 +67,7 @@ The `LocalFileSpanReceiver` looks in _hbase-site.xml_      
for a `hbase.local-fi
 
 HTrace also provides `ZipkinSpanReceiver`, which converts spans to 
link:http://github.com/twitter/zipkin[Zipkin] span format and sends them to a 
Zipkin server. In order to use this span receiver, you need to install the 
htrace-zipkin jar on your HBase classpath on all of the nodes in your cluster.
 
-_htrace-zipkin_ is published to the 
link:http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.apache.htrace%22%20AND%20a%3A%22htrace-zipkin%22[Maven
 central repository]. You could get the latest version from there or just build 
it locally (see the link:http://htrace.incubator.apache.org/[HTrace] homepage 
for information on how to do this) and then copy it out to all nodes.
+_htrace-zipkin_ is published to the 
link:http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.apache.htrace%22%20AND%20a%3A%22htrace-zipkin%22[Maven
 central repository]. You could get the latest version from there or just build 
it locally (see the link:https://htrace.incubator.apache.org/[HTrace] homepage 
for information on how to do this) and then copy it out to all nodes.
 
 `ZipkinSpanReceiver` looks for properties called 
`hbase.htrace.zipkin.collector-hostname` and 
`hbase.htrace.zipkin.collector-port` in _hbase-site.xml_ with values describing 
the Zipkin collector server to which span information is sent.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc 
b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 67a9def..ec0a34d 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -225,7 +225,7 @@ Search here first when you have an issue as its more than 
likely someone has alr
 [[trouble.resources.lists]]
 === Mailing Lists
 
-Ask a question on the link:http://hbase.apache.org/mail-lists.html[Apache 
HBase mailing lists].
+Ask a question on the link:https://hbase.apache.org/mail-lists.html[Apache 
HBase mailing lists].
 The 'dev' mailing list is aimed at the community of developers actually 
building Apache HBase and for features currently under development, and 'user' 
is generally used for questions on released versions of Apache HBase.
 Before going to the mailing list, make sure your question has not already been 
answered by searching the mailing list archives first.
 Use <<trouble.resources.searchhadoop>>.
@@ -596,7 +596,7 @@ See also Jesse Andersen's 
link:http://blog.cloudera.com/blog/2014/04/how-to-use-
 In some situations clients that fetch data from a RegionServer get a 
LeaseException instead of the usual <<trouble.client.scantimeout>>.
 Usually the source of the exception is 
`org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)` 
(line number may vary). It tends to happen in the context of a slow/freezing 
`RegionServer#next` call.
 It can be prevented by having `hbase.rpc.timeout` > 
`hbase.regionserver.lease.period`.
-Harsh J investigated the issue as part of the mailing list thread 
link:http://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase,
 mail # user - Lease does not exist exceptions]
+Harsh J investigated the issue as part of the mailing list thread 
link:https://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase,
 mail # user - Lease does not exist exceptions]
 
 [[trouble.client.scarylogs]]
 === Shell or client application throws lots of scary exceptions during normal 
operation
@@ -802,7 +802,7 @@ hadoop fs -du /hbase/myTable
 ----
 ...returns a list of the regions under the HBase table 'myTable' and their 
disk utilization.
 
-For more information on HDFS shell commands, see the 
link:http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html[HDFS
 FileSystem Shell documentation].
+For more information on HDFS shell commands, see the 
link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html[HDFS
 FileSystem Shell documentation].
 
 [[trouble.namenode.hbase.objects]]
 === Browsing HDFS for HBase Objects
@@ -1174,7 +1174,7 @@ If you have a DNS server, you can set 
`hbase.zookeeper.dns.interface` and `hbase
 
 ZooKeeper is the cluster's "canary in the mineshaft". It'll be the first to 
notice issues, if any, so making sure it's happy is the short-cut to a humming 
cluster.
 
-See the link:http://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting[ZooKeeper 
Operating Environment Troubleshooting] page.
+See the 
link:https://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting[ZooKeeper 
Operating Environment Troubleshooting] page.
 It has suggestions and tools for checking disk and networking performance; i.e.
 the operating environment your ZooKeeper and HBase are running in.
 
@@ -1313,7 +1313,7 @@ These changes were backported to HBase 0.98.x and apply 
to all newer versions.
 == HBase and HDFS
 
 General configuration guidance for Apache HDFS is out of the scope of this 
guide.
-Refer to the documentation available at http://hadoop.apache.org/ for 
extensive information about configuring HDFS.
+Refer to the documentation available at https://hadoop.apache.org/ for 
extensive information about configuring HDFS.
 This section deals with HDFS in terms of HBase.
 
 In most cases, HBase stores its data in Apache HDFS.

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc 
b/src/main/asciidoc/_chapters/unit_testing.adoc
index 55dedf4..e503f81 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -171,7 +171,7 @@ Similarly, you can now expand into other operations such as 
Get, Scan, or Delete
 
 == MRUnit
 
-link:http://mrunit.apache.org/[Apache MRUnit] is a library that allows you to 
unit-test MapReduce jobs.
+link:https://mrunit.apache.org/[Apache MRUnit] is a library that allows you to 
unit-test MapReduce jobs.
 You can use it to test HBase jobs in the same way as other MapReduce jobs.
 
 Given a MapReduce job that writes to an HBase table called `MyTest`, which has 
one column family called `CF`, the reducer of such a job could look like the 
following:

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 086fa86..9225abd 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -125,14 +125,14 @@ for warning about incompatible changes). All effort will 
be made to provide a de
 [[hbase.client.api.surface]]
 ==== HBase API Surface
 
-HBase has a lot of API points, but for the compatibility matrix above, we 
differentiate between Client API, Limited Private API, and Private API. HBase 
uses 
link:http://yetus.apache.org/documentation/0.5.0/interface-classification/[Apache
 Yetus Audience Annotations] to guide downstream expectations for stability.
+HBase has a lot of API points, but for the compatibility matrix above, we 
differentiate between Client API, Limited Private API, and Private API. HBase 
uses 
link:https://yetus.apache.org/documentation/0.5.0/interface-classification/[Apache
 Yetus Audience Annotations] to guide downstream expectations for stability.
 
-* InterfaceAudience 
(link:http://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceAudience.html[javadocs]):
 captures the intended audience, possible values include:
+* InterfaceAudience 
(link:https://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceAudience.html[javadocs]):
 captures the intended audience, possible values include:
   - Public: safe for end users and external projects
   - LimitedPrivate: used for internals we expect to be pluggable, such as 
coprocessors
   - Private: strictly for use within HBase itself
 Classes which are defined as `IA.Private` may be used as parameters or return 
values for interfaces which are declared `IA.LimitedPrivate`. Treat the 
`IA.Private` object as opaque; do not try to access its methods or fields 
directly.
-* InterfaceStability 
(link:http://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceStability.html[javadocs]):
 describes what types of interface changes are permitted. Possible values 
include:
+* InterfaceStability 
(link:https://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceStability.html[javadocs]):
 describes what types of interface changes are permitted. Possible values 
include:
   - Stable: the interface is fixed and is not expected to change
  - Evolving: the interface may change in future minor versions
   - Unstable: the interface may change at any time
@@ -159,7 +159,7 @@ HBase Private API::
 === Pre 1.0 versions
 
 .HBase Pre-1.0 versions are all EOM
-NOTE: For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y.  Deploy 
our stable version. See 
link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96], 
link:https://issues.apache.org/jira/browse/HBASE-16215[clean up of EOM 
releases], and link:http://www.apache.org/dist/hbase/[the header of our 
downloads].
+NOTE: For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y.  Deploy 
our stable version. See 
link:https://issues.apache.org/jira/browse/HBASE-11642[EOL 0.96], 
link:https://issues.apache.org/jira/browse/HBASE-16215[clean up of EOM 
releases], and link:https://www.apache.org/dist/hbase/[the header of our 
downloads].
 
 Before the semantic versioning scheme pre-1.0, HBase tracked either Hadoop's 
versions (0.2x) or 0.9x versions. If you are into the arcane, checkout our old 
wiki page on 
link:https://web.archive.org/web/20150905071342/https://wiki.apache.org/hadoop/Hbase/HBaseVersions[HBase
 Versioning] which tries to connect the HBase version dots. Below sections 
cover ONLY the releases before 1.0.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/_chapters/zookeeper.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/zookeeper.adoc 
b/src/main/asciidoc/_chapters/zookeeper.adoc
index 5f92ff0..33eeadb 100644
--- a/src/main/asciidoc/_chapters/zookeeper.adoc
+++ b/src/main/asciidoc/_chapters/zookeeper.adoc
@@ -106,7 +106,7 @@ The newer version, the better. ZooKeeper 3.4.x is required 
as of HBase 1.0.0
 .ZooKeeper Maintenance
 [CAUTION]
 ====
-Be sure to set up the data dir cleaner described under 
link:http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_maintenance[ZooKeeper
+Be sure to set up the data dir cleaner described under 
link:https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_maintenance[ZooKeeper
         Maintenance] else you could have 'interesting' problems a couple of 
months in; i.e.
 zookeeper could start dropping sessions if it has to run through a directory 
of hundreds of thousands of logs which is wont to do around leader reelection 
time -- a process rare but run on occasion whether because a machine is dropped 
or happens to hiccup.
 ====
@@ -135,9 +135,9 @@ ${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
 Note that you can use HBase in this manner to spin up a ZooKeeper cluster, 
unrelated to HBase.
 Just make sure to set `HBASE_MANAGES_ZK` to `false`      if you want it to 
stay up across HBase restarts so that when HBase shuts down, it doesn't take 
ZooKeeper down with it.
 
-For more information about running a distinct ZooKeeper cluster, see the 
ZooKeeper 
link:http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html[Getting
+For more information about running a distinct ZooKeeper cluster, see the 
ZooKeeper 
link:https://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html[Getting
         Started Guide].
-Additionally, see the 
link:http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7[ZooKeeper Wiki] or the 
link:http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup[ZooKeeper
+Additionally, see the 
link:https://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7[ZooKeeper Wiki] or the 
link:https://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup[ZooKeeper
         documentation] for more information on ZooKeeper sizing.
 
 [[zk.sasl.auth]]

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 2b9bf26..519cf9a 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -42,7 +42,7 @@
 // Logo for HTML -- doesn't render in PDF
 ++++
 <div>
-  <a href="http://hbase.apache.org"><img src="images/hbase_logo_with_orca.png" 
alt="Apache HBase Logo" /></a>
+  <a href="https://hbase.apache.org"><img 
src="images/hbase_logo_with_orca.png" alt="Apache HBase Logo" /></a>
 </div>
 ++++
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/asciidoc/acid-semantics.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/acid-semantics.adoc 
b/src/site/asciidoc/acid-semantics.adoc
index 0038901..b557165 100644
--- a/src/site/asciidoc/acid-semantics.adoc
+++ b/src/site/asciidoc/acid-semantics.adoc
@@ -82,7 +82,7 @@ NOTE:This is not true _across rows_ for multirow batch 
mutations.
 A scan is *not* a consistent view of a table. Scans do *not* exhibit _snapshot 
isolation_.
 
 Rather, scans have the following properties:
-. Any row returned by the scan will be a consistent view (i.e. that version of 
the complete row existed at some point in time)footnoteref[consistency,A 
consistent view is not guaranteed intra-row scanning -- i.e. fetching a portion 
of a row in one RPC then going back to fetch another portion of the row in a 
subsequent RPC. Intra-row scanning happens when you set a limit on how many 
values to return per Scan#next (See 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)"[Scan#setBatch(int)]).]
+. Any row returned by the scan will be a consistent view (i.e. that version of 
the complete row existed at some point in time)footnoteref[consistency,A 
consistent view is not guaranteed during intra-row scanning -- i.e. fetching a 
portion of a row in one RPC then going back to fetch another portion of the row 
in a subsequent RPC. Intra-row scanning happens when you set a limit on how many 
values to return per Scan#next (See 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)[Scan#setBatch(int)]).]
 . A scan will always reflect a view of the data _at least as new as_ the 
beginning of the scan. This satisfies the visibility guarantees enumerated 
below.
 .. For example, if client A writes data X and then communicates via a side 
channel to client B, any scans started by client B will contain data at least 
as new as X.
 .. A scan _must_ reflect all mutations committed prior to the construction of 
the scanner, and _may_ reflect some mutations committed subsequent to the 
construction of the scanner.

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/asciidoc/cygwin.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/cygwin.adoc b/src/site/asciidoc/cygwin.adoc
index 11c4df4..1056dec 100644
--- a/src/site/asciidoc/cygwin.adoc
+++ b/src/site/asciidoc/cygwin.adoc
@@ -22,11 +22,11 @@ under the License.
 
 == Introduction
 
-link:http://hbase.apache.org[Apache HBase (TM)] is a distributed, 
column-oriented store, modeled after Google's 
link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase 
is built on top of link:http://hadoop.apache.org[Hadoop] for its 
link:http://hadoop.apache.org/mapreduce[MapReduce] 
link:http://hadoop.apache.org/hdfs[distributed file system] implementations. 
All these projects are open-source and part of the 
link:http://www.apache.org[Apache Software Foundation].
+link:https://hbase.apache.org[Apache HBase (TM)] is a distributed, 
column-oriented store, modeled after Google's 
link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase 
is built on top of link:https://hadoop.apache.org[Hadoop] for its 
link:https://hadoop.apache.org/mapreduce[MapReduce] and 
link:https://hadoop.apache.org/hdfs[distributed file system] implementations. 
All these projects are open-source and part of the 
link:https://www.apache.org[Apache Software Foundation].
 
 == Purpose
 
-This document explains the *intricacies* of running Apache HBase on Windows 
using Cygwin* as an all-in-one single-node installation for testing and 
development. The HBase 
link:http://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview]
 and link:book.html#getting_started[QuickStart] guides on the other hand go a 
long way in explaning how to setup link:http://hadoop.apache.org/hbase[HBase] 
in more complex deployment scenarios.
+This document explains the *intricacies* of running Apache HBase on Windows 
using Cygwin* as an all-in-one single-node installation for testing and 
development. The HBase 
link:https://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview]
 and link:book.html#getting_started[QuickStart] guides on the other hand go a 
long way in explaining how to set up link:https://hadoop.apache.org/hbase[HBase] 
in more complex deployment scenarios.
 
 == Installation
 
@@ -86,7 +86,7 @@ HBase (and Hadoop) rely on 
link:http://nl.wikipedia.org/wiki/Secure_Shell[*SSH*]
 
 === HBase
 
-Download the *latest release* of Apache HBase from 
link:http://www.apache.org/dyn/closer.cgi/hbase/. As the Apache HBase 
distributable is just a zipped archive, installation is as simple as unpacking 
the archive so it ends up in its final *installation* directory. Notice that 
HBase has to be installed in Cygwin and a good directory suggestion is to use 
`/usr/local/` (or [`*Root* directory]\usr\local` in Windows slang). You should 
end up with a `/usr/local/hbase-_versi` installation in Cygwin.
+Download the *latest release* of Apache HBase from 
link:https://www.apache.org/dyn/closer.cgi/hbase/. As the Apache HBase 
distributable is just a zipped archive, installation is as simple as unpacking 
the archive so it ends up in its final *installation* directory. Notice that 
HBase has to be installed in Cygwin and a good directory suggestion is to use 
`/usr/local/` (or [`*Root* directory]\usr\local` in Windows slang). You should 
end up with a `/usr/local/hbase-_versi` installation in Cygwin.
 
 This finishes installation. We go on with the configuration.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/index.adoc b/src/site/asciidoc/index.adoc
index 9b31c49..dd19a99 100644
--- a/src/site/asciidoc/index.adoc
+++ b/src/site/asciidoc/index.adoc
@@ -20,7 +20,7 @@ under the License.
 = Apache HBase&#153; Home
 
 .Welcome to Apache HBase(TM)
-link:http://www.apache.org/[Apache HBase(TM)] is the 
link:http://hadoop.apache.org[Hadoop] database, a distributed, scalable, big 
data store.
+link:https://www.apache.org/[Apache HBase(TM)] is the 
link:https://hadoop.apache.org[Hadoop] database, a distributed, scalable, big 
data store.
 
 .When Would I Use Apache HBase?
 Use Apache HBase when you need random, realtime read/write access to your Big 
Data. +

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/asciidoc/metrics.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/metrics.adoc b/src/site/asciidoc/metrics.adoc
index be7d9a5..e44db4c 100644
--- a/src/site/asciidoc/metrics.adoc
+++ b/src/site/asciidoc/metrics.adoc
@@ -20,13 +20,13 @@ under the License.
 = Apache HBase (TM) Metrics
 
 == Introduction
-Apache HBase (TM) emits Hadoop 
link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[metrics].
+Apache HBase (TM) emits Hadoop 
link:https://hadoop.apache.org/core/docs/stable/api/org/apache/hadoop/metrics/package-summary.html[metrics].
 
 == Setup
 
-First read up on Hadoop 
link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[metrics].
+First read up on Hadoop 
link:https://hadoop.apache.org/core/docs/stable/api/org/apache/hadoop/metrics/package-summary.html[metrics].
 
-If you are using ganglia, the 
link:http://wiki.apache.org/hadoop/GangliaMetrics[GangliaMetrics] wiki page is 
useful read.
+If you are using ganglia, the 
link:https://wiki.apache.org/hadoop/GangliaMetrics[GangliaMetrics] wiki page is 
a useful read.
 
 To have HBase emit metrics, edit `$HBASE_HOME/conf/hadoop-metrics.properties` 
and enable metric 'contexts' per plugin.  As of this writing, hadoop supports 
*file* and *ganglia* plugins. Yes, the hbase metrics file is named 
hadoop-metrics rather than _hbase-metrics_ because currently at least the 
hadoop metrics system has the properties filename hardcoded. Per metrics 
_context_, comment out the NullContext and enable one or more plugins instead.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/asciidoc/old_news.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/old_news.adoc b/src/site/asciidoc/old_news.adoc
index ae44caa..c5cf993 100644
--- a/src/site/asciidoc/old_news.adoc
+++ b/src/site/asciidoc/old_news.adoc
@@ -57,7 +57,7 @@ October 25th, 2012:: 
link:http://www.meetup.com/HBase-NYC/events/81728932/[Strat
 
 September 11th, 2012:: 
link:http://www.meetup.com/hbaseusergroup/events/80621872/[Contributor's 
Pow-Wow at HortonWorks HQ.]
 
-August 8th, 2012:: link:http://www.apache.org/dyn/closer.cgi/hbase/[Apache 
HBase 0.94.1 is available for download]
+August 8th, 2012:: link:https://www.apache.org/dyn/closer.cgi/hbase/[Apache 
HBase 0.94.1 is available for download]
 
 June 15th, 2012:: 
link:http://www.meetup.com/hbaseusergroup/events/59829652/[Birds-of-a-feather] 
in San Jose, day after:: link:http://hadoopsummit.org[Hadoop Summit]
 
@@ -69,9 +69,9 @@ March 27th, 2012:: 
link:http://www.meetup.com/hbaseusergroup/events/56021562/[Me
 
 January 19th, 2012:: 
link:http://www.meetup.com/hbaseusergroup/events/46702842/[Meetup @ EBay]
 
-January 23rd, 2012:: Apache HBase 0.92.0 released. 
link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+January 23rd, 2012:: Apache HBase 0.92.0 released. 
link:https://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
 
-December 23rd, 2011:: Apache HBase 0.90.5 released. 
link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+December 23rd, 2011:: Apache HBase 0.90.5 released. 
link:https://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
 
 November 29th, 2011:: 
link:http://www.meetup.com/hackathon/events/41025972/[Developer Pow-Wow in SF] 
at Salesforce HQ
 
@@ -83,9 +83,9 @@ June 30th, 2011:: 
link:http://www.meetup.com/hbaseusergroup/events/20572251/[HBa
 
 June 8th, 2011:: 
link:http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon[HBase 
Hackathon] in Berlin to coincide with:: link:http://berlinbuzzwords.de/[Berlin 
Buzzwords]
 
-May 19th, 2011: Apache HBase 0.90.3 released. 
link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+May 19th, 2011: Apache HBase 0.90.3 released. 
link:https://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
 
-April 12th, 2011: Apache HBase 0.90.2 released. 
link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+April 12th, 2011: Apache HBase 0.90.2 released. 
link:https://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
 
 March 21st, 2011:: link:http://www.meetup.com/hackathon/events/16770852/[HBase 
0.92 Hackathon at StumbleUpon, SF]
 February 22nd, 2011:: 
link:http://www.meetup.com/hbaseusergroup/events/16492913/[HUG12: February 
HBase User Group at StumbleUpon SF]
@@ -105,7 +105,7 @@ March 10th, 2010:: 
link:http://www.meetup.com/hbaseusergroup/calendar/12689351/[
 
 January 27th, 2010:: Sign up for the 
link:http://www.meetup.com/hbaseusergroup/calendar/12241393/[HBase User Group 
Meeting, HUG8], at StumbleUpon in SF
 
-September 8th, 2010:: Apache HBase 0.20.0 is faster, stronger, slimmer, and 
sweeter tasting than any previous Apache HBase release.  Get it off the 
link:http://www.apache.org/dyn/closer.cgi/hbase/[Releases] page.
+September 8th, 2010:: Apache HBase 0.20.0 is faster, stronger, slimmer, and 
sweeter tasting than any previous Apache HBase release.  Get it off the 
link:https://www.apache.org/dyn/closer.cgi/hbase/[Releases] page.
 
 November 2-6th, 2009:: link:http://dev.us.apachecon.com/c/acus2009/[ApacheCon] 
in Oakland. The Apache Foundation will be celebrating its 10th anniversary in 
beautiful Oakland by the Bay. Lots of good talks and meetups including an HBase 
presentation by a couple of the lads.
 
@@ -113,7 +113,7 @@ October 2nd, 2009:: HBase at Hadoop World in NYC. A few of 
us will be talking on
 
 August 7th-9th, 2009:: HUG7 and HBase Hackathon at StumbleUpon in SF: Sign up 
for the:: link:http://www.meetup.com/hbaseusergroup/calendar/10950511/[HBase 
User Group Meeting, HUG7] or for the 
link:http://www.meetup.com/hackathon/calendar/10951718/[Hackathon] or for both 
(all are welcome!).
 
-June, 2009::  HBase at HadoopSummit2009 and at NOSQL: See the 
link:http://wiki.apache.org/hadoop/HBase/HBasePresentations[presentations]
+June, 2009::  HBase at HadoopSummit2009 and at NOSQL: See the 
link:https://wiki.apache.org/hadoop/HBase/HBasePresentations[presentations]
 
 March 3rd, 2009 :: HUG6 -- 
link:http://www.meetup.com/hbaseusergroup/calendar/9764004/[HBase User Group 6]
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/asciidoc/sponsors.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/sponsors.adoc b/src/site/asciidoc/sponsors.adoc
index 4d7ebf3..e6fec1b 100644
--- a/src/site/asciidoc/sponsors.adoc
+++ b/src/site/asciidoc/sponsors.adoc
@@ -19,7 +19,7 @@ under the License.
 
 = Apache HBase(TM) Sponsors
 
-First off, thanks to link:http://www.apache.org/foundation/thanks.html[all who 
sponsor] our parent, the Apache Software Foundation.
+First off, thanks to link:https://www.apache.org/foundation/thanks.html[all 
who sponsor] our parent, the Apache Software Foundation.
 
 The below companies have been gracious enough to provide their commercial tool 
offerings free of charge to the Apache HBase(TM) project.
 
@@ -32,5 +32,5 @@ The below companies have been gracious enough to provide 
their commerical tool o
 * Thank you to Boris at link:http://www.vectorportal.com/[Vector Portal] for 
granting us a license on the image on which our logo is based.
 
 == Sponsoring the Apache Software Foundation
-To contribute to the Apache Software Foundation, a good idea in our opinion, 
see the link:http://www.apache.org/foundation/sponsorship.html[ASF Sponsorship] 
page.
+To contribute to the Apache Software Foundation, a good idea in our opinion, 
see the link:https://www.apache.org/foundation/sponsorship.html[ASF 
Sponsorship] page.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8e0571a3/src/site/xdoc/metrics.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/metrics.xml b/src/site/xdoc/metrics.xml
index f3ab7d7..620c14b 100644
--- a/src/site/xdoc/metrics.xml
+++ b/src/site/xdoc/metrics.xml
@@ -29,11 +29,11 @@ under the License.
   <body>
     <section name="Introduction">
       <p>
-      Apache HBase (TM) emits Hadoop <a 
href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      Apache HBase (TM) emits Hadoop <a 
href="http://hadoop.apache.org/core/docs/stable/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
       </p>
       </section>
       <section name="Setup">
-      <p>First read up on Hadoop <a 
href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      <p>First read up on Hadoop <a 
href="http://hadoop.apache.org/core/docs/stable/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
      If you are using ganglia, the <a 
href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
       wiki page is useful read.</p>
       <p>To have HBase emit metrics, edit 
<code>$HBASE_HOME/conf/hadoop-metrics.properties</code>
