http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/external_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc b/src/main/asciidoc/_chapters/external_apis.adoc
index 37156ca..44603f0 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -27,32 +27,454 @@
 :icons: font
 :experimental:
-This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols.
-For information on using the native HBase APIs, refer to link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the new <<hbase_apis,HBase APIs>> chapter.
+This chapter covers access to Apache HBase either through non-Java languages or
+through custom protocols. For information on using the native HBase APIs, refer to
+link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the
+<<hbase_apis,HBase APIs>> chapter.
-[[nonjava.jvm]]
-== Non-Java Languages Talking to the JVM
+== REST
-Currently the documentation on this topic is in the link:http://wiki.apache.org/hadoop/Hbase[Apache HBase Wiki].
-See also the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/thrift/package-summary.html#package_description[Thrift API Javadoc].
+Representational State Transfer (REST) was introduced in 2000 in the doctoral
+dissertation of Roy Fielding, one of the principal authors of the HTTP specification.
-== REST
+REST itself is out of the scope of this documentation, but in general, REST allows
+client-server interactions via an API that is tied to the URL itself. This section
+discusses how to configure and run the REST server included with HBase, which exposes
+HBase tables, rows, cells, and metadata as URL-specified resources.
+There is also a nice series of blogs on
+link:http://blog.cloudera.com/blog/2013/03/how-to-use-the-apache-hbase-rest-interface-part-1/[How-to: Use the Apache HBase REST Interface]
+by Jesse Anderson.
+
+=== Starting and Stopping the REST Server
+
+The included REST server can run as a daemon which starts an embedded Jetty
+servlet container and deploys the servlet into it. Use one of the following commands
+to start the REST server in the foreground or background. The port is optional, and
+defaults to 8080.
+
+[source, bash]
+----
+# Foreground
+$ bin/hbase rest start -p <port>
+
+# Background, logging to a file in $HBASE_LOG_DIR
+$ bin/hbase-daemon.sh start rest -p <port>
+----
+
+To stop the REST server, use Ctrl-C if you were running it in the foreground, or the
+following command if you were running it in the background.
+
+[source, bash]
+----
+$ bin/hbase-daemon.sh stop rest
+----
+
+=== Configuring the REST Server and Client
+
+For information about configuring the REST server and client for SSL, as well as `doAs`
+impersonation for the REST server, see <<security.gateway.thrift>> and other portions
+of the <<security>> chapter.
+
+=== Using REST Endpoints
+
+The following examples use the placeholder server `http://example.com:8000`, and
+can all be run using `curl` or `wget`. You can request plain text (the default),
+XML, or JSON output: send no `Accept` header for plain text, the header
+`Accept: text/xml` for XML, or `Accept: application/json` for JSON.
+
+NOTE: Unless specified, use `GET` requests for queries, `PUT` or `POST` requests for
+creation or mutation, and `DELETE` for deletion.
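Because the gateway speaks plain HTTP, these endpoints can be exercised from any language, not just `curl` or `wget`. The following Python sketch (an illustration added here, not part of the commit; `rest_request` is a hypothetical helper name) builds a request against the placeholder server `http://example.com:8000` used throughout this section, showing how the `Accept` header and HTTP method select the output format and operation:

```python
import urllib.request

BASE = "http://example.com:8000"  # placeholder server used throughout this section


def rest_request(path, accept="application/json", method="GET", body=None):
    """Build a request for the HBase REST gateway.

    The Accept header chooses the encoding: omit it for plain text,
    use text/xml for XML, or application/json for JSON. GET is used for
    queries, PUT/POST for creation or mutation, DELETE for deletion.
    """
    req = urllib.request.Request(BASE + path, data=body, method=method)
    req.add_header("Accept", accept)
    return req


# Query the cluster version endpoint, asking for JSON output.
req = rest_request("/version/cluster")
print(req.get_method(), req.get_full_url(), req.get_header("Accept"))
# → GET http://example.com:8000/version/cluster application/json
# urllib.request.urlopen(req) would perform the call against a live gateway.
```

The same helper covers every endpoint in this section by varying `path`, `method`, and `accept`; only `urlopen` actually touches the network, so the request can be inspected before it is sent.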
+
+==== Cluster Information
+
+.HBase Version
+----
+http://example.com:8000/version/cluster
+----
+
+.Cluster Status
+----
+http://example.com:8000/status/cluster
+----
+
+.Table List
+----
+http://example.com:8000/
+----
+
+==== Table Information
+
+.Table Schema
+To retrieve the table schema, use a `GET` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Creation
+To create a table, use a `PUT` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Schema Update
+To update a table, use a `POST` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Deletion
+To delete a table, use a `DELETE` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Regions
+----
+http://example.com:8000/<table>/regions
+----
+
+
+==== Gets
+
+.GET a Single Cell Value
+To get a single cell value, use a URL scheme like the following:
+
+----
+http://example.com:8000/<table>/<row>/<column>:<qualifier>/<timestamp>/content:raw
+----
+
+The column qualifier and timestamp are optional; if you omit them, the whole row
+(or the newest version of each cell) is returned.
+
+.Multiple Single Values (Multi-Get)
+To get multiple single values, specify multiple column:qualifier tuples and/or a start-timestamp
+and end-timestamp. You can also limit the number of versions.
-Currently most of the documentation on REST exists in the link:http://wiki.apache.org/hadoop/Hbase/Stargate[Apache HBase Wiki on REST] (The REST gateway used to be called 'Stargate'). There are also a nice set of blogs on link:http://blog.cloudera.com/blog/2013/03/how-to-use-the-apache-hbase-rest-interface-part-1/[How-to: Use the Apache HBase REST Interface] by Jesse Anderson.
+----
+http://example.com:8000/<table>/<row>/<column>:<qualifier>?v=<num-versions>
+----
+
+.Globbing Rows
+To scan a series of rows, you can use a `*` glob
+character on the `<row>` value to glob together multiple rows.
+
+----
+http://example.com:8000/urls/https|ad.doubleclick.net|*
+----
+
+==== Puts
-To run your REST server under SSL, set `hbase.rest.ssl.enabled` to `true` and also set the following configs when you launch the REST server: (See example commands in <<jmx_config,JMX config>>)
+For Puts, `PUT` and `POST` are equivalent.
-[source]
+.Put a Single Value
+The column qualifier and the timestamp are optional.
+
+----
+http://example.com:8000/<table>/<row>/<column>:<qualifier>/<timestamp>
+http://example.com:8000/test/testrow/test:testcolumn
----
-hbase.rest.ssl.keystore.store
-hbase.rest.ssl.keystore.password
-hbase.rest.ssl.keystore.keypassword
+
+.Put Multiple Values
+To put multiple values, use a false row key. Row, column, and timestamp values in
+the supplied cells override the specifications on the path, allowing you to post
+multiple values to a table in batch. The HTTP response code indicates the status of
+the put. Set the `Content-Type` to `text/xml` for XML encoding or to `application/x-protobuf`
+for protobufs encoding. Supply the commit data in the `PUT` or `POST` body, using
+the <<xml_schema>> and <<protobufs_schema>> as guidelines.
+
+==== Scans
+
+`PUT` and `POST` are equivalent for scans.
+
+.Scanner Creation
+To create a scanner, use the `/scanner` endpoint. The HTTP response code indicates
+success (201) or failure (anything else), and on successful scanner creation, the
+URI which should be used to address the scanner is returned.
+
+----
+http://example.com:8000/<table>/scanner
+----
+
+.Scanner Get Next
+To get the next batch of cells found by the scanner, use the `/scanner/<scanner-id>`
+endpoint, using the URI returned by the scanner creation endpoint. If the scanner
+is exhausted, HTTP status `204` is returned.
+----
+http://example.com:8000/<table>/scanner/<scanner-id>
+----
+
+.Scanner Deletion
+To delete resources associated with a scanner, send an HTTP `DELETE` request to the
+`/scanner/<scanner-id>` endpoint.
+----
+http://example.com:8000/<table>/scanner/<scanner-id>
+----
+
+[[xml_schema]]
+=== REST XML Schema
+
+[source,xml]
+----
+<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="RESTSchema">
+
+  <element name="Version" type="tns:Version"></element>
+
+  <complexType name="Version">
+    <attribute name="REST" type="string"></attribute>
+    <attribute name="JVM" type="string"></attribute>
+    <attribute name="OS" type="string"></attribute>
+    <attribute name="Server" type="string"></attribute>
+    <attribute name="Jersey" type="string"></attribute>
+  </complexType>
+
+  <element name="TableList" type="tns:TableList"></element>
+
+  <complexType name="TableList">
+    <sequence>
+      <element name="table" type="tns:Table" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <complexType name="Table">
+    <sequence>
+      <element name="name" type="string"></element>
+    </sequence>
+  </complexType>
+
+  <element name="TableInfo" type="tns:TableInfo"></element>
+
+  <complexType name="TableInfo">
+    <sequence>
+      <element name="region" type="tns:TableRegion" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+  </complexType>
+
+  <complexType name="TableRegion">
+    <attribute name="name" type="string"></attribute>
+    <attribute name="id" type="int"></attribute>
+    <attribute name="startKey" type="base64Binary"></attribute>
+    <attribute name="endKey" type="base64Binary"></attribute>
+    <attribute name="location" type="string"></attribute>
+  </complexType>
+
+  <element name="TableSchema" type="tns:TableSchema"></element>
+
+  <complexType name="TableSchema">
+    <sequence>
+      <element name="column" type="tns:ColumnSchema" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+    <anyAttribute></anyAttribute>
+  </complexType>
+
+  <complexType name="ColumnSchema">
+    <attribute name="name" type="string"></attribute>
+    <anyAttribute></anyAttribute>
+  </complexType>
+
+  <element name="CellSet" type="tns:CellSet"></element>
+
+  <complexType name="CellSet">
+    <sequence>
+      <element name="row" type="tns:Row" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <element name="Row" type="tns:Row"></element>
+
+  <complexType name="Row">
+    <sequence>
+      <element name="key" type="base64Binary"></element>
+      <element name="cell" type="tns:Cell" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <element name="Cell" type="tns:Cell"></element>
+
+  <complexType name="Cell">
+    <sequence>
+      <element name="value" maxOccurs="1" minOccurs="1">
+        <simpleType><restriction base="base64Binary">
+        </restriction></simpleType>
+      </element>
+    </sequence>
+    <attribute name="column" type="base64Binary" />
+    <attribute name="timestamp" type="int" />
+  </complexType>
+
+  <element name="Scanner" type="tns:Scanner"></element>
+
+  <complexType name="Scanner">
+    <sequence>
+      <element name="column" type="base64Binary" minOccurs="0" maxOccurs="unbounded"></element>
+      <element name="filter" type="string" minOccurs="0" maxOccurs="1"></element>
+    </sequence>
+    <attribute name="startRow" type="base64Binary"></attribute>
+    <attribute name="endRow" type="base64Binary"></attribute>
+    <attribute name="batch" type="int"></attribute>
+    <attribute name="startTime" type="int"></attribute>
+    <attribute name="endTime" type="int"></attribute>
+  </complexType>
+
+  <element name="StorageClusterVersion" type="tns:StorageClusterVersion" />
+
+  <complexType name="StorageClusterVersion">
+    <attribute name="version" type="string"></attribute>
+  </complexType>
+
+  <element name="StorageClusterStatus"
+    type="tns:StorageClusterStatus">
+  </element>
+
+  <complexType name="StorageClusterStatus">
+
<sequence> + <element name="liveNode" type="tns:Node" + maxOccurs="unbounded" minOccurs="0"> + </element> + <element name="deadNode" type="string" maxOccurs="unbounded" + minOccurs="0"> + </element> + </sequence> + <attribute name="regions" type="int"></attribute> + <attribute name="requests" type="int"></attribute> + <attribute name="averageLoad" type="float"></attribute> + </complexType> + + <complexType name="Node"> + <sequence> + <element name="region" type="tns:Region" + maxOccurs="unbounded" minOccurs="0"> + </element> + </sequence> + <attribute name="name" type="string"></attribute> + <attribute name="startCode" type="int"></attribute> + <attribute name="requests" type="int"></attribute> + <attribute name="heapSizeMB" type="int"></attribute> + <attribute name="maxHeapSizeMB" type="int"></attribute> + </complexType> + + <complexType name="Region"> + <attribute name="name" type="base64Binary"></attribute> + <attribute name="stores" type="int"></attribute> + <attribute name="storefiles" type="int"></attribute> + <attribute name="storefileSizeMB" type="int"></attribute> + <attribute name="memstoreSizeMB" type="int"></attribute> + <attribute name="storefileIndexSizeMB" type="int"></attribute> + </complexType> + +</schema> ---- -HBase ships a simple REST client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/client/package-summary.html[REST client] package for details. 
-To enable SSL support for it, please also import your certificate into local java cacerts keystore:
+[[protobufs_schema]]
+=== REST Protobufs Schema
+
+[source]
----
-keytool -import -trustcacerts -file /home/user/restserver.cert -keystore $JAVA_HOME/jre/lib/security/cacerts
+message Version {
+  optional string restVersion = 1;
+  optional string jvmVersion = 2;
+  optional string osVersion = 3;
+  optional string serverVersion = 4;
+  optional string jerseyVersion = 5;
+}
+
+message StorageClusterStatus {
+  message Region {
+    required bytes name = 1;
+    optional int32 stores = 2;
+    optional int32 storefiles = 3;
+    optional int32 storefileSizeMB = 4;
+    optional int32 memstoreSizeMB = 5;
+    optional int32 storefileIndexSizeMB = 6;
+  }
+  message Node {
+    required string name = 1;    // name:port
+    optional int64 startCode = 2;
+    optional int32 requests = 3;
+    optional int32 heapSizeMB = 4;
+    optional int32 maxHeapSizeMB = 5;
+    repeated Region regions = 6;
+  }
+  // node status
+  repeated Node liveNodes = 1;
+  repeated string deadNodes = 2;
+  // summary statistics
+  optional int32 regions = 3;
+  optional int32 requests = 4;
+  optional double averageLoad = 5;
+}
+
+message TableList {
+  repeated string name = 1;
+}
+
+message TableInfo {
+  required string name = 1;
+  message Region {
+    required string name = 1;
+    optional bytes startKey = 2;
+    optional bytes endKey = 3;
+    optional int64 id = 4;
+    optional string location = 5;
+  }
+  repeated Region regions = 2;
+}
+
+message TableSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  repeated ColumnSchema columns = 3;
+  // optional helpful encodings of commonly used attributes
+  optional bool inMemory = 4;
+  optional bool readOnly = 5;
+}
+
+message ColumnSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  // optional helpful encodings of commonly used attributes
+  optional int32 ttl = 3;
+  optional int32 maxVersions = 4;
+  optional string compression = 5;
+}
+
+message Cell {
+  optional bytes row = 1;       // unused if Cell is in a CellSet
+  optional bytes column = 2;
+  optional int64 timestamp = 3;
+  optional bytes data = 4;
+}
+
+message CellSet {
+  message Row {
+    required bytes key = 1;
+    repeated Cell values = 2;
+  }
+  repeated Row rows = 1;
+}
+
+message Scanner {
+  optional bytes startRow = 1;
+  optional bytes endRow = 2;
+  repeated bytes columns = 3;
+  optional int32 batch = 4;
+  optional int64 startTime = 5;
+  optional int64 endTime = 6;
+}
----

== Thrift

@@ -64,3 +486,331 @@ Documentation about Thrift has moved to <<thrift>>.

FB's Chip Turner wrote a pure C/C++ client. link:https://github.com/facebook/native-cpp-hbase-client[Check it out].
+
+[[jdo]]
+== Using Java Data Objects (JDO) with HBase
+
+link:https://db.apache.org/jdo/[Java Data Objects (JDO)] is a standard way to
+access persistent data in databases, using plain old Java objects (POJO) to
+represent persistent data.
+
+.Dependencies
+This code example has the following dependencies:
+
+. HBase 0.90.x or newer
+. commons-beanutils.jar (http://commons.apache.org/)
+. commons-pool-1.5.5.jar (http://commons.apache.org/)
+. transactional-tableindexed for HBase 0.90 (https://github.com/hbase-trx/hbase-transactional-tableindexed)
+
+.Download `hbase-jdo`
+Download the code from http://code.google.com/p/hbase-jdo/.
+
+.JDO Example
+====
+
+This example uses JDO to create a table and an index, insert a row into a table, get
+a row, get a column value, perform a query, and do some additional HBase operations.
+
+[source, java]
+----
+package com.apache.hadoop.hbase.client.jdo.examples;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.util.Hashtable;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.tableindexed.IndexedTable;
+
+import com.apache.hadoop.hbase.client.jdo.AbstractHBaseDBO;
+import com.apache.hadoop.hbase.client.jdo.HBaseBigFile;
+import com.apache.hadoop.hbase.client.jdo.HBaseDBOImpl;
+import com.apache.hadoop.hbase.client.jdo.query.DeleteQuery;
+import com.apache.hadoop.hbase.client.jdo.query.HBaseOrder;
+import com.apache.hadoop.hbase.client.jdo.query.HBaseParam;
+import com.apache.hadoop.hbase.client.jdo.query.InsertQuery;
+import com.apache.hadoop.hbase.client.jdo.query.QSearch;
+import com.apache.hadoop.hbase.client.jdo.query.SelectQuery;
+import com.apache.hadoop.hbase.client.jdo.query.UpdateQuery;
+
+/**
+ * HBase JDO example.
+ *
+ * Dependent libraries:
+ * - commons-beanutils.jar
+ * - commons-pool-1.5.5.jar
+ * - hbase0.90.0-transactional.jar
+ *
+ * You can extend the Delete, Select, Update, and Insert query classes.
+ */
+public class HBaseExample {
+  public static void main(String[] args) throws Exception {
+    AbstractHBaseDBO dbo = new HBaseDBOImpl();
+
+    // drop the table if it already exists
+    if(dbo.isTableExist("user")){
+      dbo.deleteTable("user");
+    }
+
+    // create the table
+    dbo.createTableIfNotExist("user",HBaseOrder.DESC,"account");
+    //dbo.createTableIfNotExist("user",HBaseOrder.ASC,"account");
+
+    // create an index on the existing table
+    String[] cols={"id","name"};
+    dbo.addIndexExistingTable("user","account",cols);
+
+    // insert a row
+    InsertQuery insert = dbo.createInsertQuery("user");
+    UserBean bean = new UserBean();
+    bean.setFamily("account");
+    bean.setAge(20);
+    bean.setEmail("[email protected]");
+    bean.setId("ncanis");
+    bean.setName("ncanis");
+    bean.setPassword("1111");
+    insert.insert(bean);
+
+    // select a single row
+    SelectQuery select = dbo.createSelectQuery("user");
+    UserBean resultBean = (UserBean)select.select(bean.getRow(),UserBean.class);
+
+    // select a single column value
+    String value = (String)select.selectColumn(bean.getRow(),"account","id",String.class);
+
+    // search with options (QSearch supports EQUAL, NOT_EQUAL, LIKE)
+    // select id,password,name,email from account where id='ncanis' limit startRow,20
+    HBaseParam param = new HBaseParam();
+    param.setPage(bean.getRow(),20);
+    param.addColumn("id","password","name","email");
+    param.addSearchOption("id","ncanis",QSearch.EQUAL);
+    select.search("account", param, UserBean.class);
+
+    // check whether a column value exists
+    boolean isExist = select.existColumnValue("account","id","ncanis".getBytes());
+
+    // update the password
+    UpdateQuery update = dbo.createUpdateQuery("user");
+    Hashtable<String, byte[]> colsTable = new Hashtable<String, byte[]>();
+    colsTable.put("password","2222".getBytes());
+    update.update(bean.getRow(),"account",colsTable);
+
+    // delete a row
+    DeleteQuery delete = dbo.createDeleteQuery("user");
+    delete.deleteRow(resultBean.getRow());
+
+    ////////////////////////////////////
+    // etc
+
+    // HTable pool backed by Apache Commons Pool: borrow and release tables.
+    // HBasePoolManager(maxActive, minIdle, etc.)
+    IndexedTable table = dbo.getPool().borrow("user");
+    dbo.getPool().release(table);
+
+    // upload a big file directly to Hadoop
+ HBaseBigFile bigFile = new HBaseBigFile(); + File file = new File("doc/movie.avi"); + FileInputStream fis = new FileInputStream(file); + Path rootPath = new Path("/files/"); + String filename = "movie.avi"; + bigFile.uploadFile(rootPath,filename,fis,true); + + // receive file stream from hadoop. + Path p = new Path(rootPath,filename); + InputStream is = bigFile.path2Stream(p,4096); + + } +} +---- +==== + +[[scala]] +== Scala + +=== Setting the Classpath + +To use Scala with HBase, your CLASSPATH must include HBase's classpath as well as +the Scala JARs required by your code. First, use the following command on a server +running the HBase RegionServer process, to get HBase's classpath. + +[source, bash] +---- +$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'} + +/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64 +---- + +Set the `$CLASSPATH` environment variable to include the path you found in the previous +step, plus the path of `scala-library.jar` and each additional Scala-related JAR needed for +your project. + +[source, bash] +---- +$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/scala-library.jar +---- + +=== Scala SBT File + +Your `build.sbt` file needs the following `resolvers` and `libraryDependencies` to work +with HBase. + +---- +resolvers += "Apache HBase" at "https://repository.apache.org/content/repositories/releases" + +resolvers += "Thrift" at "http://people.apache.org/~rawson/repo/" + +libraryDependencies ++= Seq( + "org.apache.hadoop" % "hadoop-core" % "0.20.2", + "org.apache.hbase" % "hbase" % "0.90.4" +) +---- + +=== Example Scala Code + +This example lists HBase tables, creates a new table, and adds a row to it. 
+
+[source, scala]
+----
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.client.{Connection,ConnectionFactory,HBaseAdmin,HTable,Put,Get}
+import org.apache.hadoop.hbase.util.Bytes
+
+
+val conf = new HBaseConfiguration()
+val connection = ConnectionFactory.createConnection(conf)
+val admin = connection.getAdmin()
+
+// list the tables
+val listtables = admin.listTables()
+listtables.foreach(println)
+
+// let's insert some data in 'mytable' and get the row
+
+val table = new HTable(conf, "mytable")
+
+val theput = new Put(Bytes.toBytes("rowkey1"))
+
+theput.add(Bytes.toBytes("ids"),Bytes.toBytes("id1"),Bytes.toBytes("one"))
+table.put(theput)
+
+val theget = new Get(Bytes.toBytes("rowkey1"))
+val result = table.get(theget)
+val value = result.value()
+println(Bytes.toString(value))
+----
+
+[[jython]]
+== Jython
+
+=== Setting the Classpath
+
+To use Jython with HBase, your CLASSPATH must include HBase's classpath as well as
+the Jython JARs required by your code. First, use the following command on a server
+running the HBase RegionServer process, to get HBase's classpath.
+
+[source, bash]
+----
+$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
+
+/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
+----
+
+Set the `$CLASSPATH` environment variable to include the path you found in the previous
+step, plus the path to `jython.jar` and each additional Jython-related JAR needed for
+your project.
+
+[source, bash]
+----
+$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/jython.jar
+----
+
+Start a Jython shell with HBase and Hadoop JARs in the classpath:
+
+[source, bash]
+----
+$ bin/hbase org.python.util.jython
+----
+
+=== Jython Code Examples
+
+.Table Creation, Population, Get, and Delete with Jython
+====
+The following Jython code example creates a table, populates it with data, fetches
+the data, and deletes the table.
+
+[source,jython]
+----
+import java.lang
+from org.apache.hadoop.hbase import HBaseConfiguration, HTableDescriptor, HColumnDescriptor, HConstants
+from org.apache.hadoop.hbase.client import HBaseAdmin, HTable, Get, Put
+from org.apache.hadoop.hbase.util import Bytes
+
+# First get a conf object. This will read in the configuration
+# that is out in your hbase-*.xml files such as location of the
+# hbase master node.
+conf = HBaseConfiguration()
+
+# Create a table named 'test' that has two column families,
+# one named 'content', and the other 'anchor'.
+tablename = "test"
+
+desc = HTableDescriptor(tablename)
+desc.addFamily(HColumnDescriptor("content"))
+desc.addFamily(HColumnDescriptor("anchor"))
+admin = HBaseAdmin(conf)
+
+# Drop and recreate if it exists
+if admin.tableExists(tablename):
+    admin.disableTable(tablename)
+    admin.deleteTable(tablename)
+admin.createTable(desc)
+
+tables = admin.listTables()
+table = HTable(conf, tablename)
+
+# Add content to the 'content' family on a row named 'row_x'
+row = 'row_x'
+update = Put(Bytes.toBytes(row))
+update.add(Bytes.toBytes('content'), Bytes.toBytes(''), Bytes.toBytes('some content'))
+table.put(update)
+
+# Now fetch the content just added, returns a byte[]
+get = Get(Bytes.toBytes(row))
+result = table.get(get)
+data = java.lang.String(result.value(), "UTF8")
+
+print "The fetched row contains the value '%s'" % data
+
+# Delete the table.
+admin.disableTable(desc.getName())
+admin.deleteTable(desc.getName())
+----
+====
+
+.Table Scan Using Jython
+====
+This example scans a table and returns results from a given column family.
+
+[source, jython]
+----
+# Print all rows that are members of a particular column family
+
+import java.lang
+
+from org.apache.hadoop.hbase import HBaseConfiguration
+from org.apache.hadoop.hbase.client import HTable
+from org.apache.hadoop.hbase.util import Bytes
+
+conf = HBaseConfiguration()
+
+table = HTable(conf, "wiki")
+
+# scan only the 'title' column family
+scanner = table.getScanner(Bytes.toBytes("title"))
+while 1:
+    result = scanner.next()
+    if not result:
+        break
+    print java.lang.String(result.getRow()), java.lang.String(result.value())
+----
+==== \ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc b/src/main/asciidoc/_chapters/mapreduce.adoc
index 2a42af2..1337c79 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -33,7 +33,9 @@ A good place to get started with MapReduce is http://hadoop.apache.org/docs/r2.6
MapReduce version 2 (MR2) is now part of link:http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/[YARN].

This chapter discusses specific configuration steps you need to take to use MapReduce on data within HBase.
-In addition, it discusses other interactions and issues between HBase and MapReduce jobs.
+In addition, it discusses other interactions and issues between HBase and MapReduce
+jobs. Finally, it discusses <<cascading,Cascading>>, an
+link:http://www.cascading.org/[alternative API] for MapReduce.

.`mapred` and `mapreduce`
[NOTE]
@@ -594,3 +596,50 @@ This can either be done on a per-Job basis through properties, on on the entire
Especially for longer running jobs, speculative execution will create duplicate map-tasks which will double-write your data to HBase; this is probably not what you want.
See <<spec.ex,spec.ex>> for more information.
+
+[[cascading]]
+== Cascading
+
+link:http://www.cascading.org/[Cascading] is an alternative API for MapReduce, which
+actually uses MapReduce, but allows you to write your MapReduce code in a simplified
+way.
+
+The following example shows a Cascading `Flow` which "sinks" data into an HBase cluster. The same
+`hBaseTap` API could be used to "source" data as well.
+ +[source, java] +---- +// read data from the default filesystem +// emits two fields: "offset" and "line" +Tap source = new Hfs( new TextLine(), inputFileLhs ); + +// store data in a HBase cluster +// accepts fields "num", "lower", and "upper" +// will automatically scope incoming fields to their proper familyname, "left" or "right" +Fields keyFields = new Fields( "num" ); +String[] familyNames = {"left", "right"}; +Fields[] valueFields = new Fields[] {new Fields( "lower" ), new Fields( "upper" ) }; +Tap hBaseTap = new HBaseTap( "multitable", new HBaseScheme( keyFields, familyNames, valueFields ), SinkMode.REPLACE ); + +// a simple pipe assembly to parse the input into fields +// a real app would likely chain multiple Pipes together for more complex processing +Pipe parsePipe = new Each( "insert", new Fields( "line" ), new RegexSplitter( new Fields( "num", "lower", "upper" ), " " ) ); + +// "plan" a cluster executable Flow +// this connects the source Tap and hBaseTap (the sink Tap) to the parsePipe +Flow parseFlow = new FlowConnector( properties ).connect( source, hBaseTap, parsePipe ); + +// start the flow, and block until complete +parseFlow.complete(); + +// open an iterator on the HBase table we stuffed data into +TupleEntryIterator iterator = parseFlow.openSink(); + +while(iterator.hasNext()) + { + // print out each tuple from HBase + System.out.println( "iterator.next() = " + iterator.next() ); + } + +iterator.close(); +---- http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/ops_mgt.adoc ---------------------------------------------------------------------- diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc index af99215..c5f52f5 100644 --- a/src/main/asciidoc/_chapters/ops_mgt.adoc +++ b/src/main/asciidoc/_chapters/ops_mgt.adoc @@ -637,10 +637,14 @@ See link:https://issues.apache.org/jira/browse/HBASE-4391[HBASE-4391 Add ability [[compaction.tool]] === Offline Compaction 
Tool -See the usage for the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[Compaction - Tool]. -Run it like this +./bin/hbase - org.apache.hadoop.hbase.regionserver.CompactionTool+ +See the usage for the +link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[CompactionTool]. +Run it like: + +[source, bash] +---- +$ ./bin/hbase org.apache.hadoop.hbase.regionserver.CompactionTool +---- === `hbase clean` @@ -1252,7 +1256,8 @@ Have a look in the Web UI. == Cluster Replication -NOTE: This information was previously available at link:http://hbase.apache.org/replication.html[Cluster Replication]. +NOTE: This information was previously available at +link:http://hbase.apache.org#replication[Cluster Replication]. HBase provides a cluster replication mechanism which allows you to keep one cluster's state synchronized with that of another cluster, using the write-ahead log (WAL) of the source cluster to propagate the changes. Some use cases for cluster replication include: http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/performance.adoc ---------------------------------------------------------------------- diff --git a/src/main/asciidoc/_chapters/performance.adoc b/src/main/asciidoc/_chapters/performance.adoc index 90ee4bf..bf0e790 100644 --- a/src/main/asciidoc/_chapters/performance.adoc +++ b/src/main/asciidoc/_chapters/performance.adoc @@ -109,7 +109,7 @@ The link:http://en.wikipedia.org/wiki/CAP_theorem[CAP Theorem] states that a dis HBase favors consistency and partition tolerance, where a decision has to be made. Coda Hale explains why partition tolerance is so important, in http://codahale.com/you-cant-sacrifice-partition-tolerance/. 
-Robert Yokota used an automated testing framework called link:https://aphyr.com/tags/jepsen[Jepson] to test HBase's partition tolerance in the face of network partitions, using techniques modeled after Aphyr's link:https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions[Call Me Maybe] series. The results, available as a link:http://old.eng.yammer.com/call-me-maybe-hbase/[blog post] and an link:http://old.eng.yammer.com/call-me-maybe-hbase-addendum/[addendum], show that HBase performs correctly.
+Robert Yokota used an automated testing framework called link:https://aphyr.com/tags/jepsen[Jepsen] to test HBase's partition tolerance in the face of network partitions, using techniques modeled after Aphyr's link:https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions[Call Me Maybe] series. The results, available as a link:https://rayokota.wordpress.com/2015/09/30/call-me-maybe-hbase/[blog post] and an link:https://rayokota.wordpress.com/2015/09/30/call-me-maybe-hbase-addendum/[addendum], show that HBase performs correctly.

[[jvm]]
== Java

@@ -196,7 +196,8 @@ tableDesc.addFamily(cfDesc);
----
====

-See the API documentation for link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
+See the API documentation for
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].

[[perf.rs.memstore.size]]
=== `hbase.regionserver.global.memstore.size`

@@ -676,7 +677,7 @@ Enabling Bloom Filters can save your having to go to disk and can help improve r
link:http://en.wikipedia.org/wiki/Bloom_filter[Bloom filters] were developed over in link:https://issues.apache.org/jira/browse/HBASE-1200[HBase-1200 Add bloomfilters].
For description of the development process -- why static blooms rather than dynamic -- and for an overview of the unique properties that pertain to blooms in HBase, as well as possible future directions, see the _Development Process_ section of the document link:https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf[BloomFilters in HBase] attached to link:https://issues.apache.org/jira/browse/HBASE-1200[HBASE-1200].
 The bloom filters described here are actually version two of blooms in HBase.
-In versions up to 0.19.x, HBase had a dynamic bloom option based on work done by the link:http://www.one-lab.org[European Commission One-Lab Project 034819].
+In versions up to 0.19.x, HBase had a dynamic bloom option based on work done by the link:http://www.one-lab.org/[European Commission One-Lab Project 034819].
 The core of the HBase bloom work was later pulled up into Hadoop to implement org.apache.hadoop.io.BloomMapFile.
 Version 1 of HBase blooms never worked that well.
 Version 2 is a rewrite from scratch though again it starts with the one-lab work.

http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/preface.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/preface.adoc b/src/main/asciidoc/_chapters/preface.adoc
index 960fcc4..50df7ff 100644
--- a/src/main/asciidoc/_chapters/preface.adoc
+++ b/src/main/asciidoc/_chapters/preface.adoc
@@ -29,20 +29,29 @@
 This is the official reference guide for the link:http://hbase.apache.org/[HBase] version it ships with.

-Herein you will find either the definitive documentation on an HBase topic as of its standing when the referenced HBase version shipped, or it will point to the location in link:http://hbase.apache.org/apidocs/index.html[Javadoc], link:https://issues.apache.org/jira/browse/HBASE[JIRA] or link:http://wiki.apache.org/hadoop/Hbase[wiki] where the pertinent information can be found.
+Herein you will find either the definitive documentation on an HBase topic as of its +standing when the referenced HBase version shipped, or it will point to the location +in link:http://hbase.apache.org/apidocs/index.html[Javadoc] or +link:https://issues.apache.org/jira/browse/HBASE[JIRA] where the pertinent information can be found. .About This Guide -This reference guide is a work in progress. The source for this guide can be found in the _src/main/asciidoc directory of the HBase source. This reference guide is marked up using link:http://asciidoc.org/[AsciiDoc] from which the finished guide is generated as part of the 'site' build target. Run +This reference guide is a work in progress. The source for this guide can be found in the +_src/main/asciidoc directory of the HBase source. This reference guide is marked up +using link:http://asciidoc.org/[AsciiDoc] from which the finished guide is generated as part of the +'site' build target. Run [source,bourne] ---- mvn site ---- to generate this documentation. Amendments and improvements to the documentation are welcomed. -Click link:https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12310753&issuetype=1&components=12312132&summary=SHORT+DESCRIPTION[this link] to file a new documentation bug against Apache HBase with some values pre-selected. +Click +link:https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12310753&issuetype=1&components=12312132&summary=SHORT+DESCRIPTION[this link] +to file a new documentation bug against Apache HBase with some values pre-selected. .Contributing to the Documentation -For an overview of AsciiDoc and suggestions to get started contributing to the documentation, see the <<appendix_contributing_to_documentation,relevant section later in this documentation>>. 
+For an overview of AsciiDoc and suggestions to get started contributing to the documentation, +see the <<appendix_contributing_to_documentation,relevant section later in this documentation>>. .Heads-up if this is your first foray into the world of distributed computing... If this is your first foray into the wonderful world of Distributed Computing, then you are in for some interesting times. @@ -57,7 +66,7 @@ Yours, the HBase Community. .Reporting Bugs -Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-security-related bugs. +Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-security-related bugs. To protect existing HBase installations from new vulnerabilities, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list [email protected], which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report. http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/security.adoc ---------------------------------------------------------------------- diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc index 3d9082c..fb2a6b0 100644 --- a/src/main/asciidoc/_chapters/security.adoc +++ b/src/main/asciidoc/_chapters/security.adoc @@ -42,7 +42,7 @@ HBase provides mechanisms to secure various components and aspects of HBase and == Using Secure HTTP (HTTPS) for the Web UI A default HBase install uses insecure HTTP connections for Web UIs for the master and region servers. -To enable secure HTTP (HTTPS) connections instead, set `hadoop.ssl.enabled` to `true` in _hbase-site.xml_. +To enable secure HTTP (HTTPS) connections instead, set `hbase.ssl.enabled` to `true` in _hbase-site.xml_. This does not change the port used by the Web UI. 
To change the port for the web UI for a given HBase component, configure that port's setting in hbase-site.xml. These settings are: @@ -522,21 +522,21 @@ This is future work. Secure HBase requires secure ZooKeeper and HDFS so that users cannot access and/or modify the metadata and data from under HBase. HBase uses HDFS (or configured file system) to keep its data files as well as write ahead logs (WALs) and other data. HBase uses ZooKeeper to store some metadata for operations (master address, table locks, recovery state, etc). === Securing ZooKeeper Data -ZooKeeper has a pluggable authentication mechanism to enable access from clients using different methods. ZooKeeper even allows authenticated and un-authenticated clients at the same time. The access to znodes can be restricted by providing Access Control Lists (ACLs) per znode. An ACL contains two components, the authentication method and the principal. ACLs are NOT enforced hierarchically. See link:https://zookeeper.apache.org/doc/r3.3.6/zookeeperProgrammers.html#sc_ZooKeeperPluggableAuthentication[ZooKeeper Programmers Guide] for details. +ZooKeeper has a pluggable authentication mechanism to enable access from clients using different methods. ZooKeeper even allows authenticated and un-authenticated clients at the same time. The access to znodes can be restricted by providing Access Control Lists (ACLs) per znode. An ACL contains two components, the authentication method and the principal. ACLs are NOT enforced hierarchically. See link:https://zookeeper.apache.org/doc/r3.3.6/zookeeperProgrammers.html#sc_ZooKeeperPluggableAuthentication[ZooKeeper Programmers Guide] for details. -HBase daemons authenticate to ZooKeeper via SASL and kerberos (See <<zk.sasl.auth>>). HBase sets up the znode ACLs so that only the HBase user and the configured hbase superuser (`hbase.superuser`) can access and modify the data. 
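The hunk above corrects the SSL property name to `hbase.ssl.enabled`. As a sketch only, that property would typically be set in _hbase-site.xml_ like this (the property name comes from the corrected hunk; the placement is the usual hbase-site.xml convention):

```xml
<!-- hbase-site.xml: serve the Master and RegionServer web UIs over HTTPS -->
<property>
  <name>hbase.ssl.enabled</name>
  <value>true</value>
</property>
```

Note that, as the text says, this does not change the port the UIs listen on; only the scheme changes.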
In cases where ZooKeeper is used for service discovery or sharing state with the client, the znodes created by HBase will also allow anyone (regardless of authentication) to read these znodes (clusterId, master address, meta location, etc), but only the HBase user can modify them. +HBase daemons authenticate to ZooKeeper via SASL and kerberos (See <<zk.sasl.auth>>). HBase sets up the znode ACLs so that only the HBase user and the configured hbase superuser (`hbase.superuser`) can access and modify the data. In cases where ZooKeeper is used for service discovery or sharing state with the client, the znodes created by HBase will also allow anyone (regardless of authentication) to read these znodes (clusterId, master address, meta location, etc), but only the HBase user can modify them. === Securing File System (HDFS) Data -All of the data under management is kept under the root directory in the file system (`hbase.rootdir`). Access to the data and WAL files in the filesystem should be restricted so that users cannot bypass the HBase layer, and peek at the underlying data files from the file system. HBase assumes the filesystem used (HDFS or other) enforces permissions hierarchically. If sufficient protection from the file system (both authorization and authentication) is not provided, HBase level authorization control (ACLs, visibility labels, etc) is meaningless since the user can always access the data from the file system. +All of the data under management is kept under the root directory in the file system (`hbase.rootdir`). Access to the data and WAL files in the filesystem should be restricted so that users cannot bypass the HBase layer, and peek at the underlying data files from the file system. HBase assumes the filesystem used (HDFS or other) enforces permissions hierarchically. 
If sufficient protection from the file system (both authorization and authentication) is not provided, HBase level authorization control (ACLs, visibility labels, etc) is meaningless since the user can always access the data from the file system. HBase enforces the posix-like permissions 700 (`rwx------`) to its root directory. It means that only the HBase user can read or write the files in FS. The default setting can be changed by configuring `hbase.rootdir.perms` in hbase-site.xml. A restart of the active master is needed so that it changes the used permissions. For versions before 1.2.0, you can check whether HBASE-13780 is committed, and if not, you can manually set the permissions for the root directory if needed. Using HDFS, the command would be: [source,bash] ---- sudo -u hdfs hadoop fs -chmod 700 /hbase ---- -You should change `/hbase` if you are using a different `hbase.rootdir`. +You should change `/hbase` if you are using a different `hbase.rootdir`. -In secure mode, SecureBulkLoadEndpoint should be configured and used for properly handing of users files created from MR jobs to the HBase daemons and HBase user. The staging directory in the distributed file system used for bulk load (`hbase.bulkload.staging.dir`, defaults to `/tmp/hbase-staging`) should have (mode 711, or `rwx--x--x`) so that users can access the staging directory created under that parent directory, but cannot do any other operation. See <<hbase.secure.bulkload>> for how to configure SecureBulkLoadEndPoint. +In secure mode, SecureBulkLoadEndpoint should be configured and used for properly handing of users files created from MR jobs to the HBase daemons and HBase user. The staging directory in the distributed file system used for bulk load (`hbase.bulkload.staging.dir`, defaults to `/tmp/hbase-staging`) should have (mode 711, or `rwx--x--x`) so that users can access the staging directory created under that parent directory, but cannot do any other operation. 
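Both permission settings discussed in this hunk correspond to configuration properties named in the text (`hbase.rootdir.perms` and `hbase.bulkload.staging.dir`). A hedged sketch of how they might appear in _hbase-site.xml_, using the defaults the text itself states (700 for the root directory; _/tmp/hbase-staging_ for the staging directory):

```xml
<!-- hbase-site.xml: filesystem permissions for the HBase root dir
     and the bulk-load staging dir (values are the documented defaults) -->
<property>
  <name>hbase.rootdir.perms</name>
  <value>700</value>
</property>
<property>
  <name>hbase.bulkload.staging.dir</name>
  <value>/tmp/hbase-staging</value>
</property>
```

Remember that changing `hbase.rootdir.perms` requires an active-master restart before the new mode is applied.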
See <<hbase.secure.bulkload>> for how to configure SecureBulkLoadEndPoint. == Securing Access To Your Data @@ -1334,7 +1334,7 @@ static Table createTableAndWriteDataWithLabels(TableName tableName, String... la <<reading_cells_with_labels>> ==== Reading Cells with Labels -When you issue a Scan or Get, HBase uses your default set of authorizations to filter out cells that you do not have access to. A superuser can set the default set of authorizations for a given user by using the `set_auths` HBase Shell command or the link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityClient.html#setAuths(org.apache.hadoop.conf.Configuration,%20java.lang.String\[\],%20java.lang.String)[VisibilityClient.setAuths()] method. +When you issue a Scan or Get, HBase uses your default set of authorizations to filter out cells that you do not have access to. A superuser can set the default set of authorizations for a given user by using the `set_auths` HBase Shell command or the link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityClient.html#setAuths(org.apache.hadoop.hbase.client.Connection,%20java.lang.String[],%20java.lang.String)[VisibilityClient.setAuths()] method. You can specify a different authorization during the Scan or Get, by passing the AUTHORIZATIONS option in HBase Shell, or the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setAuthorizations%28org.apache.hadoop.hbase.security.visibility.Authorizations%29[setAuthorizations()] method if you use the API. This authorization will be combined with your default set as an additional filter. It will further filter your results, rather than giving you additional authorization. @@ -1582,7 +1582,8 @@ Rotate the Master Key:: === Secure Bulk Load Bulk loading in secure mode is a bit more involved than normal setup, since the client has to transfer the ownership of the files generated from the MapReduce job to HBase. 
-Secure bulk loading is implemented by a coprocessor, named link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html[SecureBulkLoadEndpoint], which uses a staging directory configured by the configuration property `hbase.bulkload.staging.dir`, which defaults to _/tmp/hbase-staging/_.
+Secure bulk loading is implemented by a coprocessor, named
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html[SecureBulkLoadEndpoint],
+which uses a staging directory configured by the configuration property `hbase.bulkload.staging.dir`, which defaults to _/tmp/hbase-staging/_.

 .Secure Bulk Load Algorithm

http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/thrift_filter_language.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/thrift_filter_language.adoc b/src/main/asciidoc/_chapters/thrift_filter_language.adoc
index 744cec6..da36cea 100644
--- a/src/main/asciidoc/_chapters/thrift_filter_language.adoc
+++ b/src/main/asciidoc/_chapters/thrift_filter_language.adoc
@@ -31,7 +31,6 @@
 Apache link:http://thrift.apache.org/[Thrift] is a cross-platform, cross-language development framework.
 HBase includes a Thrift API and filter language.
 The Thrift API relies on client and server processes.
-Documentation about the HBase Thrift API is located at http://wiki.apache.org/hadoop/Hbase/ThriftApi.

 You can configure Thrift for secure authentication at the server and client side, by following the procedures in <<security.client.thrift>> and <<security.gateway.thrift>>.

@@ -250,7 +249,7 @@ RowFilter::
 Family Filter::
   This filter takes a compare operator and a comparator.
-  It compares each qualifier name with the comparator using the compare operator and if the comparison returns true, it returns all the key-values in that column.
+  It compares each column family name with the comparator using the compare operator and if the comparison returns true, it returns all the Cells in that column family.

 QualifierFilter::
   This filter takes a compare operator and a comparator.

http://git-wip-us.apache.org/repos/asf/hbase/blob/3e8ede1d/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 9742654..13c3c0e 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -92,7 +92,7 @@ In addition to the usual API versioning considerations HBase has other compatibi
 .Operational Compatibility
 * Metric changes
 * Behavioral changes of services
-* Web page APIs
+* JMX APIs exposed via the `/jmx/` endpoint

 .Summary
 * A patch upgrade is a drop-in replacement. Any change that is not Java binary compatible would not be allowed.footnote:[See http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html.]. Downgrading versions within patch releases may not be compatible.

@@ -192,6 +192,12 @@ See <<zookeeper.requirements>>.
 .HBase Default Ports Changed
 The ports used by HBase changed. They used to be in the 600XX range. In HBase 1.0.0 they have been moved up out of the ephemeral port range and are 160XX instead (Master web UI was 60010 and is now 16010; the RegionServer web UI was 60030 and is now 16030, etc.). If you want to keep the old port locations, copy the port setting configs from _hbase-default.xml_ into _hbase-site.xml_, change them back to the old values from the HBase 0.98.x era, and ensure you've distributed your configurations before you restart.

+.HBase Master Port Binding Change
+In HBase 1.0.x, the HBase Master binds the RegionServer ports as well as the Master
+ports. This behavior is changed from HBase versions prior to 1.0.
In HBase 1.1 and 2.0 branches,
+this behavior is reverted to the pre-1.0 behavior of the HBase master not binding the RegionServer
+ports.
+
 [[upgrade1.0.hbase.bucketcache.percentage.in.combinedcache]]
 .hbase.bucketcache.percentage.in.combinedcache configuration has been REMOVED
 You may have made use of this configuration if you are using BucketCache. If NOT using BucketCache, this change does not affect you. Its removal means that your L1 LruBlockCache is now sized using `hfile.block.cache.size` -- i.e. the way you would size the on-heap L1 LruBlockCache if you were NOT doing BucketCache -- and the BucketCache size is now whatever the setting for `hbase.bucketcache.size` is. You may need to adjust configs to get the LruBlockCache and BucketCache sizes set to what they were in 0.98.x and previous. If you did not set this config., its default value was 0.9. If you do nothing, your BucketCache will increase in size by 10%. Your L1 LruBlockCache will become `hfile.block.cache.size` times your java heap size (`hfile.block.cache.size` is a float between 0.0 and 1.0). To read more, see link:https://issues.apache.org/jira/browse/HBASE-11520[HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"].
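The ".HBase Default Ports Changed" note above suggests copying the port settings into _hbase-site.xml_ and restoring the 0.98.x-era values. A hedged sketch for the two web UI ports that the note names explicitly (other 600XX-range ports would need the same treatment):

```xml
<!-- hbase-site.xml: pin the web UI ports back to their pre-1.0 values -->
<property>
  <name>hbase.master.info.port</name>
  <value>60010</value>
</property>
<property>
  <name>hbase.regionserver.info.port</name>
  <value>60030</value>
</property>
```

As the note says, distribute the changed configuration to all nodes before restarting.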
