Author: tdsilva
Date: Thu Apr 30 00:35:06 2015
New Revision: 1676881

URL: http://svn.apache.org/r1676881
Log:
Added configs to use high priority queues for Index and Metadata rpc calls. 
Fixed CREATE TABLE examples.

Modified:
    phoenix/site/publish/language/index.html
    phoenix/site/publish/secondary_indexing.html
    phoenix/site/source/src/site/markdown/secondary_indexing.md

Modified: phoenix/site/publish/language/index.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/language/index.html?rev=1676881&r1=1676880&r2=1676881&view=diff
==============================================================================
--- phoenix/site/publish/language/index.html (original)
+++ phoenix/site/publish/language/index.html Thu Apr 30 00:35:06 2015
@@ -549,7 +549,7 @@ syntax-end -->
 <p>Creates a new table. The <code>HBase</code> table and any column families 
referenced are created if they don&#39;t already exist. All table, column 
family and column names are uppercased unless they are double quoted, in which 
case they are case sensitive. Column families that exist in the 
<code>HBase</code> table but are not listed are ignored. At create time, to 
improve query performance, an empty key value is added to the first column 
family of any existing rows, or to the default column family if no column 
families are explicitly defined. Upserts will also add this empty key value. 
This improves query performance by providing a key value column we can 
guarantee is always there, thus minimizing the amount of data that must be 
projected and subsequently returned to the client. <code>HBase</code> table and 
column configuration options may be passed through as key/value pairs to 
configure the <code>HBase</code> table as desired. Note that when using the 
<code>IF NOT EXISTS</code> clause, if a table already exists, no change will be 
made to it. Additionally, no validation is done to check whether the existing 
table metadata matches the proposed table metadata, so it&#39;s better to use 
<code>DROP TABLE</code> followed by <code>CREATE TABLE</code> if the table 
metadata may be changing.</p>
 <p>Example:</p>
 <p class="notranslate">
-CREATE TABLE my_schema.my_table ( id BIGINT not null primary key, date DATE 
not null)<br />CREATE TABLE my_table ( id INTEGER not null primary key desc, 
date DATE not null,<br />&nbsp;&nbsp;&nbsp;&nbsp;m.db_utilization DECIMAL, 
i.db_utilization)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;m.DATA_BLOCK_ENCODING=&#39;DIFF&#39;<br />CREATE 
TABLE stats.prod_metrics ( host char(50) not null, created_date date not 
null,<br />&nbsp;&nbsp;&nbsp;&nbsp;txn_count bigint CONSTRAINT pk PRIMARY KEY 
(host, created_date) )<br />CREATE TABLE IF NOT EXISTS 
&quot;my_case_sensitive_table&quot;<br />&nbsp;&nbsp;&nbsp;&nbsp;( 
&quot;id&quot; char(10) not null primary key, &quot;value&quot; integer)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;DATA_BLOCK_ENCODING=&#39;NONE&#39;,VERSIONS=5,MAX_FILESIZE=2000000
 split on (?, ?, ?)<br />CREATE TABLE IF NOT EXISTS my_schema.my_table (<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;org_id CHAR(15), entity_id CHAR(15), payload 
binary(1000),<br />&nbsp;&nbsp;&nbsp;&nbsp;CONSTRAINT pk PRIMARY KEY (org_id, 
entity_id) )<br />&nbsp;&nbsp;&nbsp;&nbsp;TTL=86400</p>
+CREATE TABLE my_schema.my_table ( id BIGINT not null primary key, date 
DATE)<br />CREATE TABLE my_table ( id INTEGER not null primary key desc, date 
DATE,<br />&nbsp;&nbsp;&nbsp;&nbsp;m.db_utilization DECIMAL, 
i.db_utilization)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;m.DATA_BLOCK_ENCODING=&#39;DIFF&#39;<br />CREATE 
TABLE stats.prod_metrics ( host char(50) not null, created_date date not 
null,<br />&nbsp;&nbsp;&nbsp;&nbsp;txn_count bigint CONSTRAINT pk PRIMARY KEY 
(host, created_date) )<br />CREATE TABLE IF NOT EXISTS 
&quot;my_case_sensitive_table&quot;<br />&nbsp;&nbsp;&nbsp;&nbsp;( 
&quot;id&quot; char(10) not null primary key, &quot;value&quot; integer)<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;DATA_BLOCK_ENCODING=&#39;NONE&#39;,VERSIONS=5,MAX_FILESIZE=2000000
 split on (?, ?, ?)<br />CREATE TABLE IF NOT EXISTS my_schema.my_table (<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;org_id CHAR(15), entity_id CHAR(15), payload 
binary(1000),<br />&nbsp;&nbsp;&nbsp;&nbsp;CONSTRAINT pk PRIMARY KEY (org_id, 
entity_id) )<br />&nbsp;&nbsp;&nbsp;&nbsp;TTL=86400</p>
 
 <h3 id="drop_table" class="notranslate">DROP TABLE</h3>
 <!-- railroad-start -->

Modified: phoenix/site/publish/secondary_indexing.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/secondary_indexing.html?rev=1676881&r1=1676880&r2=1676881&view=diff
==============================================================================
--- phoenix/site/publish/secondary_indexing.html (original)
+++ phoenix/site/publish/secondary_indexing.html Thu Apr 30 00:35:06 2015
@@ -286,15 +286,23 @@ CREATE LOCAL INDEX my_index ON my_table
   &lt;name&gt;hbase.regionserver.wal.codec&lt;/name&gt;
   
&lt;value&gt;org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec&lt;/value&gt;
 &lt;/property&gt;
-&lt;property&gt;
+</pre> 
+ </div> 
+ <p>The above property enables custom WAL edits to be written, ensuring proper 
writing/replay of the index updates. This codec supports the usual host of 
WALEdit options, most notably WALEdit compression.</p> 
+ <div class="source"> 
+  <pre>&lt;property&gt;
   &lt;name&gt;hbase.region.server.rpc.scheduler.factory.class&lt;/name&gt;
-  &lt;value&gt;org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory&lt;/value&gt;
-  &lt;description&gt;Factory to create the Phoenix RPC Scheduler that knows to 
put index updates into index queues&lt;/description&gt;
+  &lt;value&gt;org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory&lt;/value&gt;
+  &lt;description&gt;Factory to create the Phoenix RPC Scheduler that uses 
separate queues for index and metadata rpc calls&lt;/description&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.rpc.controllerfactory.class&lt;/name&gt;
+  &lt;value&gt;org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory&lt;/value&gt;
+  &lt;description&gt;Factory to create the Phoenix RPC Controller that sets 
the priority for index and metadata rpc calls&lt;/description&gt;
 &lt;/property&gt;
 </pre> 
  </div> 
- <p>The first property enables custom WAL edits to be written, ensuring proper 
writing/replay of the index updates. This codec supports the usual host of 
WALEdit options, most notably WALEdit compression.</p> 
- <p>The second property prevents deadlocks from occurring during index 
maintenance for global indexes (HBase 0.98.4+ only) by ensuring index updates 
are processed with a higher priority than data updates.</p> 
+ <p>The above properties prevent deadlocks from occurring during index 
maintenance for global indexes (HBase 0.98.4+ and Phoenix 4.3.1+ only) by 
ensuring index updates are processed with a higher priority than data updates. 
They also prevent deadlocks by ensuring metadata rpc calls are processed with a 
higher priority than data rpc calls.</p> 
  <p>Local indexing also requires special configurations in the master to 
ensure data table and local index regions co-location.</p> 
  <p>You will need to add the following parameters to <tt>hbase-site.xml</tt> 
on the master:</p> 
  <div class="source"> 

Modified: phoenix/site/source/src/site/markdown/secondary_indexing.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/secondary_indexing.md?rev=1676881&r1=1676880&r2=1676881&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/secondary_indexing.md (original)
+++ phoenix/site/source/src/site/markdown/secondary_indexing.md Thu Apr 30 
00:35:06 2015
@@ -146,16 +146,24 @@ You will need to add the following param
   <name>hbase.regionserver.wal.codec</name>
   <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
 </property>
+```
+
+The above property enables custom WAL edits to be written, ensuring proper 
writing/replay of the index updates. This codec supports the usual host of 
WALEdit options, most notably WALEdit compression.
+
+```
 <property>
   <name>hbase.region.server.rpc.scheduler.factory.class</name>
-  <value>org.apache.phoenix.hbase.index.ipc.PhoenixIndexRpcSchedulerFactory</value>
-  <description>Factory to create the Phoenix RPC Scheduler that knows to put 
index updates into index queues</description>
+  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
+  <description>Factory to create the Phoenix RPC Scheduler that uses separate 
queues for index and metadata updates</description>
+</property>
+<property>
+  <name>hbase.rpc.controllerfactory.class</name>
+  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
+  <description>Factory to create the Phoenix RPC Controller that sets the 
priority for index and metadata rpc calls</description>
 </property>
 ```
 
-The first property enables custom WAL edits to be written, ensuring proper 
writing/replay of the index updates. This codec supports the usual host of 
WALEdit options, most notably WALEdit compression.
-
-The second property prevents deadlocks from occurring during index maintenance 
for global indexes (HBase 0.98.4+ only) by ensuring index updates are processed 
with a higher priority than data updates.
+The above properties prevent deadlocks from occurring during index maintenance 
for global indexes (HBase 0.98.4+ and Phoenix 4.3.1+ only) by ensuring index 
updates are processed with a higher priority than data updates. They also 
prevent deadlocks by ensuring metadata rpc calls are processed with a higher 
priority than data rpc calls.
 
 Local indexing also requires special configurations in the master to ensure 
data table and local index regions co-location.
 

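For reference, after this patch the region server side of `hbase-site.xml` 
carries three properties for global secondary indexing. This is simply a 
consolidated view of the property names and class names exactly as they appear 
in the diff above, not additional configuration; verify the class names against 
your installed Phoenix (4.3.1+) and HBase (0.98.4+) versions:

```
<!-- WAL codec that supports custom index WAL edits (incl. WALEdit compression) -->
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<!-- RPC scheduler with separate high-priority queues for index/metadata calls -->
<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
</property>
<!-- RPC controller that assigns those priorities to index/metadata calls -->
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
</property>
```

A rolling restart of the region servers is typically required for 
`hbase-site.xml` changes of this kind to take effect.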
