Repository: hbase
Updated Branches:
  refs/heads/master 62deb8172 -> 7da47509d


http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/index.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/index.xml b/src/site/xdoc/index.xml
new file mode 100644
index 0000000..1848d40
--- /dev/null
+++ b/src/site/xdoc/index.xml
@@ -0,0 +1,109 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Apache HBase&#8482; Home</title>
+    <link rel="shortcut icon" href="/images/favicon.ico" />
+  </properties>
+
+  <body>
+    <section name="Welcome to Apache HBase&#8482;">
+        <p><a href="http://www.apache.org/">Apache</a> HBase&#8482; is the <a href="http://hadoop.apache.org/">Hadoop</a> database, a distributed, scalable, big data store.
+    </p>
+    <p>Use Apache HBase&#8482; when you need random, realtime read/write access to your Big Data.
+    This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
+Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's <a href="http://research.google.com/archive/bigtable.html">Bigtable: A Distributed Storage System for Structured Data</a> by Chang et al.
+ Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
+    </p>
+  </section>
+    <section name="Download">
+    <p>
+    Click <b><a href="http://www.apache.org/dyn/closer.cgi/hbase/">here</a></b> to download Apache HBase&#8482;.
+    </p>
+    </section>
+    <section name="Features">
+    <p>
+<ul>
+    <li>Linear and modular scalability.
+</li>
+    <li>Strictly consistent reads and writes.
+</li>
+    <li>Automatic and configurable sharding of tables.
+</li>
+    <li>Automatic failover support between RegionServers.
+</li>
+    <li>Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
+</li>
+    <li>Easy to use Java API for client access.
+</li>
+    <li>Block cache and Bloom Filters for real-time queries.
+</li>
+    <li>Query predicate push down via server side Filters.
+</li>
+    <li>Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options.
+</li>
+    <li>Extensible JRuby-based (JIRB) shell.
+</li>
+    <li>Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
+</li>
+</ul>
+</p>
+</section>
+     <section name="More Info">
+   <p>See the <a href="http://hbase.apache.org/book.html#arch.overview">Architecture Overview</a>, the <a href="http://hbase.apache.org/book.html#faq">Apache HBase Reference Guide FAQ</a>,
+    and the other documentation links.
+   </p>
+   <dl>
+     <dt>Export Control</dt>
+   <dd><p>The HBase distribution includes cryptographic software. See the export control notice <a href="export_control.html">here</a>.
+   </p></dd>
+     <dt>Code Of Conduct</dt>
+   <dd><p>We expect participants in discussions on the HBase project mailing lists, Slack and IRC channels, and JIRA issues to abide by the Apache Software Foundation's <a href="http://apache.org/foundation/policies/conduct.html">Code of Conduct</a>. More information can be found <a href="coc.html">here</a>.
+   </p></dd>
+ </dl>
+</section>
+
+     <section name="News">
+       <p>August 4th, 2017 <a href="https://easychair.org/cfp/HBaseConAsia2017">HBaseCon Asia 2017</a> @ the Huawei Campus in Shenzhen, China</p>
+       <p>June 12th, 2017 <a href="https://easychair.org/cfp/hbasecon2017">HBaseCon2017</a> at the Crittenden Buildings on the Google Mountain View Campus</p>
+       <p>April 25th, 2017 <a href="https://www.meetup.com/hbaseusergroup/events/239291716/">Meetup</a> @ Visa in Palo Alto</p>
+       <p>December 8th, 2016 <a href="https://www.meetup.com/hbaseusergroup/events/235542241/">Meetup@Splice</a> in San Francisco</p>
+       <p>September 26th, 2016 <a href="http://www.meetup.com/HBase-NYC/events/233024937/">HBaseConEast2016</a> at Google in Chelsea, NYC</p>
+       <p>May 24th, 2016 <a href="http://www.hbasecon.com/">HBaseCon2016</a> at The Village, 969 Market, San Francisco</p>
+       <p>June 25th, 2015 <a href="http://www.zusaar.com/event/14057003">HBase Summer Meetup 2015</a> in Tokyo</p>
+       <p>May 7th, 2015 <a href="http://hbasecon.com/">HBaseCon2015</a> in San Francisco</p>
+       <p>February 17th, 2015 <a href="http://www.meetup.com/hbaseusergroup/events/219260093/">HBase meetup around Strata+Hadoop World</a> in San Jose</p>
+       <p>January 15th, 2015 <a href="http://www.meetup.com/hbaseusergroup/events/218744798/">HBase meetup @ AppDynamics</a> in San Francisco</p>
+       <p>November 20th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/205219992/">HBase meetup @ WANdisco</a> in San Ramon</p>
+       <p>October 27th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/207386102/">HBase Meetup @ Apple</a> in Cupertino</p>
+       <p>October 15th, 2014 <a href="http://www.meetup.com/HBase-NYC/events/207655552/">HBase Meetup @ Google</a> on the night before Strata/HW in NYC</p>
+       <p>September 25th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/203173692/">HBase Meetup @ Continuuity</a> in Palo Alto</p>
+       <p>August 28th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/197773762/">HBase Meetup @ Sift Science</a> in San Francisco</p>
+       <p>July 17th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/190994082/">HBase Meetup @ HP</a> in Sunnyvale</p>
+       <p>June 5th, 2014 <a href="http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/">HBase BOF at Hadoop Summit</a>, San Jose Convention Center</p>
+       <p>May 5th, 2014 <a href="http://www.hbasecon.com/">HBaseCon2014</a> at the Hilton San Francisco on Union Square</p>
+       <p>March 12th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/160757912/">HBase Meetup @ Ancestry.com</a> in San Francisco</p>
+      <p><small><a href="old_news.html">Old News</a></small></p>
+    </section>
+  </body>
+
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/metrics.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/metrics.xml b/src/site/xdoc/metrics.xml
new file mode 100644
index 0000000..f3ab7d7
--- /dev/null
+++ b/src/site/xdoc/metrics.xml
@@ -0,0 +1,150 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title> 
+      Apache HBase (TM) Metrics
+    </title>
+  </properties>
+
+  <body>
+    <section name="Introduction">
+      <p>
+      Apache HBase (TM) emits Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      </p>
+      </section>
+      <section name="Setup">
+      <p>First read up on Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      If you are using ganglia, the <a href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
+      wiki page is a useful read.</p>
+      <p>To have HBase emit metrics, edit <code>$HBASE_HOME/conf/hadoop-metrics.properties</code>
+      and enable metric 'contexts' per plugin.  As of this writing, hadoop supports
+      <strong>file</strong> and <strong>ganglia</strong> plugins.
+      Yes, the hbase metrics file is named hadoop-metrics rather than
+      <em>hbase-metrics</em> because, at least currently, the hadoop metrics system has the
+      properties filename hardcoded. Per metrics <em>context</em>,
+      comment out the NullContext and enable one or more plugins instead.
+      </p>
+      <p>
+      If you enable the <em>hbase</em> context, on regionservers you'll see total requests since last
+      metric emission, counts of regions and storefiles, as well as the memstore size.
+      On the master, you'll see a count of the cluster's requests.
+      </p>
+      <p>
+      Enabling the <em>rpc</em> context is good if you are interested in seeing
+      metrics on each hbase rpc method invocation (counts and time taken).
+      </p>
+      <p>
+      The <em>jvm</em> context is
+      useful for long-term stats on running hbase jvms -- memory used, thread counts, etc.
+      As of this writing, if more than one jvm is running and emitting metrics, at least
+      in ganglia, the stats are aggregated rather than reported per instance.
+      </p>
+    </section>
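As an illustration of the per-context edit described above (this sketch is not part of the patch: the gmetad address `ganglia-server:8649` and the 10-second period are placeholder values), swapping the NullContext for the ganglia plugin in `hadoop-metrics.properties` might look like:

```properties
# hadoop-metrics.properties -- emit each context to ganglia instead of the
# NullContext (replace ganglia-server:8649 with your gmond/gmetad address)

# "hbase" context
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
hbase.period=10
hbase.servers=ganglia-server:8649

# "jvm" context
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=ganglia-server:8649

# "rpc" context
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
rpc.period=10
rpc.servers=ganglia-server:8649
```

Restart the HBase daemons after editing the file so the metrics system rereads it.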
+
+    <section name="Using with JMX">
+      <p>
+      In addition to the standard output contexts supported by the Hadoop 
+      metrics package, you can also export HBase metrics via Java Management 
+      Extensions (JMX).  This will allow viewing HBase stats in JConsole or 
+      any other JMX client.
+      </p>
+      <section name="Enable HBase stats collection">
+      <p>
+      To enable JMX support in HBase, first edit 
+      <code>$HBASE_HOME/conf/hadoop-metrics.properties</code> to support 
+      metrics refreshing. (If you're running 0.94.1 or above, or have already configured
+      <code>hadoop-metrics.properties</code> for another output context,
+      you can skip this step.)
+      </p>
+      <source>
+# Configuration of the "hbase" context for null
+hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+hbase.period=60
+
+# Configuration of the "jvm" context for null
+jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+jvm.period=60
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+rpc.period=60
+      </source>
+      </section>
+      <section name="Setup JMX remote access">
+      <p>
+      For remote access, you will need to configure JMX remote passwords 
+      and access profiles.  Create the files:
+      </p>
+      <dl>
+        <dt><code>$HBASE_HOME/conf/jmxremote.passwd</code> (set permissions 
+        to 600)</dt>
+        <dd>
+        <source>
+monitorRole monitorpass
+controlRole controlpass
+        </source>
+        </dd>
+        
+        <dt><code>$HBASE_HOME/conf/jmxremote.access</code></dt>
+        <dd>
+        <source>
+monitorRole readonly
+controlRole readwrite
+        </source>
+        </dd>
+      </dl>
+      </section>
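The file creation above can be scripted; a minimal sketch (the role names and passwords are the sample values from the text, and the demo directory `/tmp/hbase-demo` stands in for a real `$HBASE_HOME`):

```shell
# Create the JMX password and access files with the sample roles above,
# then restrict the password file to its owner: the JVM refuses to enable
# JMX remoting if jmxremote.passwd is readable by other users.
HBASE_HOME="/tmp/hbase-demo"   # placeholder; use your real $HBASE_HOME
mkdir -p "$HBASE_HOME/conf"

printf 'monitorRole monitorpass\ncontrolRole controlpass\n' \
  > "$HBASE_HOME/conf/jmxremote.passwd"
printf 'monitorRole readonly\ncontrolRole readwrite\n' \
  > "$HBASE_HOME/conf/jmxremote.access"

chmod 600 "$HBASE_HOME/conf/jmxremote.passwd"
```

Remember to change the sample passwords before exposing the port beyond localhost.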
+      <section name="Configure JMX in HBase startup">
+      <p>
+      Finally, edit the <code>$HBASE_HOME/conf/hbase-env.sh</code>
+      script to add JMX support: 
+      </p>
+      <dl>
+        <dt><code>$HBASE_HOME/conf/hbase-env.sh</code></dt>
+        <dd>
+        <p>Add the lines:</p>
+        <source>
+HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.access.file=$HBASE_HOME/conf/jmxremote.access"
+
+export HBASE_MASTER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10101"
+export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10102"
+        </source>
+        </dd>
+      </dl>
+      <p>
+      After restarting the processes you want to monitor, you should now be 
+      able to run JConsole (included with the JDK since JDK 5.0) to view 
+      the statistics via JMX.  HBase MBeans are exported under the 
+      <strong><code>hadoop</code></strong> domain in JMX.
+      </p>
+      </section>
+      <section name="Understanding HBase Metrics">
+      <p>
+      For more information on understanding HBase metrics, see the <a href="book.html#hbase_metrics">metrics section</a> in the Apache HBase Reference Guide.
+      </p>
+      </section>
+    </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/old_news.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/old_news.xml b/src/site/xdoc/old_news.xml
new file mode 100644
index 0000000..94e1882
--- /dev/null
+++ b/src/site/xdoc/old_news.xml
@@ -0,0 +1,92 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Old Apache HBase (TM) News
+    </title>
+  </properties>
+  <body>
+  <section name="Old News">
+         <p>February 10th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/163139322/">HBase Meetup @ Continuuity</a> in Palo Alto</p>
+         <p>January 30th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/158491762/">HBase Meetup @ Apple</a> in Cupertino</p>
+         <p>January 30th, 2014 <a href="http://www.meetup.com/Los-Angeles-HBase-User-group/events/160560282/">Los Angeles HBase User Group</a> in El Segundo</p>
+         <p>October 24th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/140759692/">HBase User</a> and <a href="http://www.meetup.com/hackathon/events/144366512/">Developer</a> Meetup at HortonWorks in Palo Alto</p>
+         <p>September 26, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/135862292/">HBase Meetup at Arista Networks</a> in San Francisco</p>
+         <p>August 20th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/120534362/">HBase Meetup at Flurry</a> in San Francisco</p>
+         <p>July 16th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/119929152/">HBase Meetup at Twitter</a> in San Francisco</p>
+         <p>June 25th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/119154442/">Hadoop Summit Meetup</a> at San Jose Convention Center</p>
+         <p>June 14th, 2013 <a href="http://kijicon.eventbrite.com/">KijiCon: Building Big Data Apps</a> in San Francisco.</p>
+         <p>June 13th, 2013 <a href="http://www.hbasecon.com/">HBaseCon2013</a> in San Francisco.  Submit an Abstract!</p>
+         <p>June 12th, 2013 <a href="http://www.meetup.com/hackathon/events/123403802/">HBaseConHackAthon</a> at the Cloudera office in San Francisco.</p>
+         <p>April 11th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/103587852/">HBase Meetup at AdRoll</a> in San Francisco</p>
+         <p>February 28th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/96584102/">HBase Meetup at Intel Mission Campus</a></p>
+         <p>February 19th, 2013 <a href="http://www.meetup.com/hackathon/events/103633042/">Developers PowWow</a> at HortonWorks' new digs</p>
+         <p>January 23rd, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/91381312/">HBase Meetup at WibiData World HQ!</a></p>
+            <p>December 4th, 2012 <a href="http://www.meetup.com/hackathon/events/90536432/">0.96 Bug Squashing and Testing Hackathon</a> at Cloudera, SF.</p>
+            <p>October 29th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/82791572/">HBase User Group Meetup</a> at Wize Commerce in San Mateo.</p>
+            <p>October 25th, 2012 <a href="http://www.meetup.com/HBase-NYC/events/81728932/">Strata/Hadoop World HBase Meetup</a> in NYC</p>
+            <p>September 11th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/80621872/">Contributor's Pow-Wow at HortonWorks HQ</a></p>
+            <p>August 8th, 2012 <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache HBase 0.94.1 is available for download</a></p>
+            <p>June 15th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/59829652/">Birds-of-a-feather</a> in San Jose, day after <a href="http://hadoopsummit.org">Hadoop Summit</a></p>
+            <p>May 23rd, 2012 <a href="http://www.meetup.com/hackathon/events/58953522/">HackConAthon</a> in Palo Alto</p>
+            <p>May 22nd, 2012 <a href="http://www.hbasecon.com">HBaseCon2012</a> in San Francisco</p>
+            <p>March 27th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/56021562/">Meetup @ StumbleUpon</a> in San Francisco</p>
+
+            <p>January 19th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/46702842/">Meetup @ EBay</a></p>
+            <p>January 23rd, 2012 Apache HBase 0.92.0 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>December 23rd, 2011 Apache HBase 0.90.5 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>November 29th, 2011 <a href="http://www.meetup.com/hackathon/events/41025972/">Developer Pow-Wow in SF</a> at Salesforce HQ</p>
+            <p>November 7th, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/35682812/">HBase Meetup in NYC (6PM)</a> at the AppNexus office</p>
+            <p>August 22nd, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/28518471/">HBase Hackathon (11AM) and Meetup (6PM)</a> at FB in PA</p>
+            <p>June 30th, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/20572251/">HBase Contributor Day</a>, the day after the <a href="http://developer.yahoo.com/events/hadoopsummit2011/">Hadoop Summit</a> hosted by Y!</p>
+            <p>June 8th, 2011 <a href="http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon">HBase Hackathon</a> in Berlin to coincide with <a href="http://berlinbuzzwords.de/">Berlin Buzzwords</a></p>
+            <p>May 19th, 2011 Apache HBase 0.90.3 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>April 12th, 2011 Apache HBase 0.90.2 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>March 21st, <a href="http://www.meetup.com/hackathon/events/16770852/">HBase 0.92 Hackathon at StumbleUpon, SF</a></p>
+            <p>February 22nd, <a href="http://www.meetup.com/hbaseusergroup/events/16492913/">HUG12: February HBase User Group at StumbleUpon SF</a></p>
+            <p>December 13th, <a href="http://www.meetup.com/hackathon/calendar/15597555/">HBase Hackathon: Coprocessor Edition</a></p>
+      <p>November 19th, <a href="http://huguk.org/">Hadoop HUG in London</a> is all about Apache HBase</p>
+      <p>November 15-19th, <a href="http://www.devoxx.com/display/Devoxx2K10/Home">Devoxx</a> features HBase Training and multiple HBase presentations</p>
+      <p>October 12th, HBase-related presentations by core contributors and users at <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/">Hadoop World 2010</a></p>
+      <p>October 11th, <a href="http://www.meetup.com/hbaseusergroup/calendar/14606174/">HUG-NYC: HBase User Group NYC Edition</a> (Night before Hadoop World)</p>
+      <p>June 30th, <a href="http://www.meetup.com/hbaseusergroup/calendar/13562846/">Apache HBase Contributor Workshop</a> (Day after Hadoop Summit)</p>
+      <p>May 10th, 2010: Apache HBase graduates from Hadoop sub-project to Apache Top Level Project</p>
+      <p>Signup for <a href="http://www.meetup.com/hbaseusergroup/calendar/12689490/">HBase User Group Meeting, HUG10</a> hosted by Trend Micro, April 19th, 2010</p>
+
+      <p><a href="http://www.meetup.com/hbaseusergroup/calendar/12689351/">HBase User Group Meeting, HUG9</a> hosted by Mozilla, March 10th, 2010</p>
+      <p>Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/12241393/">HBase User Group Meeting, HUG8</a>, January 27th, 2010 at StumbleUpon in SF</p>
+      <p>September 8th, 2010: Apache HBase 0.20.0 is faster, stronger, slimmer, and sweeter tasting than any previous Apache HBase release.  Get it off the <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Releases</a> page.</p>
+      <p><a href="http://dev.us.apachecon.com/c/acus2009/">ApacheCon</a> in Oakland: November 2-6th, 2009:
+      The Apache Foundation will be celebrating its 10th anniversary in beautiful Oakland by the Bay. Lots of good talks and meetups including an HBase presentation by a couple of the lads.</p>
+      <p>HBase at Hadoop World in NYC: October 2nd, 2009: A few of us will be talking on Practical HBase out east at <a href="http://www.cloudera.com/hadoop-world-nyc">Hadoop World: NYC</a>.</p>
+      <p>HUG7 and HBase Hackathon: August 7th-9th, 2009 at StumbleUpon in SF: Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/10950511/">HBase User Group Meeting, HUG7</a> or for the <a href="http://www.meetup.com/hackathon/calendar/10951718/">Hackathon</a> or for both (all are welcome!).</p>
+      <p>June, 2009 -- HBase at HadoopSummit2009 and at NOSQL: See the <a href="http://wiki.apache.org/hadoop/HBase/HBasePresentations">presentations</a></p>
+      <p>March 3rd, 2009 -- HUG6: <a href="http://www.meetup.com/hbaseusergroup/calendar/9764004/">HBase User Group 6</a></p>
+      <p>January 30th, 2009 -- LA Hbackathon: <a href="http://www.meetup.com/hbasela/calendar/9450876/">HBase January Hackathon Los Angeles</a> at <a href="http://streamy.com">Streamy</a> in Manhattan Beach</p>
+  </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/poweredbyhbase.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/poweredbyhbase.xml b/src/site/xdoc/poweredbyhbase.xml
new file mode 100644
index 0000000..ff1ba59
--- /dev/null
+++ b/src/site/xdoc/poweredbyhbase.xml
@@ -0,0 +1,398 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Powered By Apache HBase&#8482;</title>
+  </properties>
+
+<body>
+<section name="Powered By Apache HBase&#8482;">
+  <p>This page lists some institutions and projects which are using HBase. To
+    have your organization added, file a documentation JIRA or email
+    <a href="mailto:d...@hbase.apache.org">hbase-dev</a> with the relevant
+    information. If you notice out-of-date information, use the same avenues to
+    report it.
+  </p>
+  <p><b>These items are user-submitted and the HBase team assumes no responsibility for their accuracy.</b></p>
+  <dl>
+  <dt><a href="http://www.adobe.com">Adobe</a></dt>
+  <dd>We currently have about 30 nodes running HDFS, Hadoop and HBase in clusters
+    ranging from 5 to 14 nodes on both production and development. We plan a
+    deployment on an 80 node cluster. We are using HBase in several areas from
+    social services to structured data and processing for internal use. We constantly
+    write data to HBase and run mapreduce jobs to process then store it back to
+    HBase or external systems. Our production cluster has been running since Oct 2008.</dd>
+
+  <dt><a href="http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase">Project Astro</a></dt>
+  <dd>
+    Astro provides fast Spark SQL/DataFrame capabilities to HBase data,
+    featuring super-efficient access to multi-dimensional HBase rows through
+    native Spark execution in HBase coprocessor plus systematic and accurate
+    partition pruning and predicate pushdown from arbitrarily complex data
+    filtering logic. The batch load is optimized to run on the Spark execution
+    engine. Note that <a href="http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase">Spark-SQL-on-HBase</a>
+    is the release site. Interested parties are free to make clones and claim
+    to be "latest (and active)", but they are not endorsed by the owner.
+  </dd>
+
+  <dt><a href="http://axibase.com/products/axibase-time-series-database/">Axibase
+    Time Series Database (ATSD)</a></dt>
+  <dd>ATSD runs on top of HBase to collect, analyze and visualize time series
+    data at scale. ATSD capabilities include optimized storage schema, built-in
+    rule engine, forecasting algorithms (Holt-Winters and ARIMA) and next-generation
+    graphics designed for high-frequency data. Primary use cases: IT infrastructure
+    monitoring, data consolidation, operational historian in OPC environments.</dd>
+
+  <dt><a href="http://www.benipaltechnologies.com">Benipal Technologies</a></dt>
+  <dd>We have a 35 node cluster used for HBase and Mapreduce with Lucene / SOLR
+    and katta integration to create and finetune our search databases. Currently,
+    our HBase installation has over 10 Billion rows with 100s of datapoints per row.
+    We compute over 10<sup>18</sup> calculations daily using MapReduce directly on HBase. We
+    heart HBase.</dd>
+
+  <dt><a href="https://github.com/ermanpattuk/BigSecret">BigSecret</a></dt>
+  <dd>BigSecret is a security framework that is designed to secure Key-Value data,
+    while preserving efficient processing capabilities. It achieves cell-level
+    security, using combinations of different cryptographic techniques, in an
+    efficient and secure manner. It provides a wrapper library around HBase.</dd>
+
+  <dt><a href="http://caree.rs">Caree.rs</a></dt>
+  <dd>Accelerated hiring platform for HiTech companies. We use HBase and Hadoop
+    for all aspects of our backend - job and company data storage, analytics
+    processing, machine learning algorithms for our hire recommendation engine.
+    Our live production site is directly served from HBase. We use Cascading for
+    running offline data processing jobs.</dd>
+
+  <dt><a href="http://www.celer-tech.com/">Celer Technologies</a></dt>
+  <dd>Celer Technologies is a global financial software company that creates
+    modular-based systems that have the flexibility to meet tomorrow's business
+    environment, today.  The Celer framework uses Hadoop/HBase for storing all
+    financial data for trading, risk, clearing in a single data store. With our
+    flexible framework and all the data in Hadoop/HBase, clients can build new
+    features to quickly extract data based on their trading, risk and clearing
+    activities from one single location.</dd>
+
+  <dt><a href="http://www.explorys.net">Explorys</a></dt>
+  <dd>Explorys uses an HBase cluster containing over a billion anonymized clinical
+    records, to enable subscribers to search and analyze patient populations,
+    treatment protocols, and clinical outcomes.</dd>
+
+  <dt><a href="http://www.facebook.com/notes/facebook-engineering/the-underlying-technology-of-messages/454991608919">Facebook</a></dt>
+  <dd>Facebook uses HBase to power their Messages infrastructure.</dd>
+
+  <dt><a href="http://www.filmweb.pl">Filmweb</a></dt>
+  <dd>Filmweb is a film web portal with a large dataset of films, persons and
+    movie-related entities. We have just started a small cluster of 3 HBase nodes
+    to handle our web cache persistency layer. We plan to increase the cluster
+    size, and also to start migrating some of the data from our databases which
+    have some demanding scalability requirements.</dd>
+
+  <dt><a href="http://www.flurry.com">Flurry</a></dt>
+  <dd>Flurry provides mobile application analytics. We use HBase and Hadoop for
+    all of our analytics processing, and serve all of our live requests directly
+    out of HBase on our 50 node production cluster with tens of billions of rows
+    over several tables.</dd>
+
+  <dt><a href="http://gumgum.com">GumGum</a></dt>
+  <dd>GumGum is an In-Image Advertising Platform. We use HBase on a 15-node
+    Amazon EC2 High-CPU Extra Large (c1.xlarge) cluster for both real-time data
+    and analytics. Our production cluster has been running since June 2010.</dd>
+
+  <dt><a href="http://helprace.com/help-desk/">Helprace</a></dt>
+  <dd>Helprace is a customer service platform which uses Hadoop for analytics
+    and internal searching and filtering. Being on HBase we can share our HBase
+    and Hadoop cluster with other Hadoop processes - this particularly helps in
+    keeping community speeds up. We use Hadoop and HBase on a small cluster with 4
+    cores and 32 GB RAM each.</dd>
+
+  <dt><a href="http://hubspot.com";>HubSpot</a></dt>
+  <dd>HubSpot is an online marketing platform, providing analytics, email, and
+    segmentation of leads/contacts.  HBase is our primary datastore for our
+    customers' customer data, with multiple HBase clusters powering the
+    majority of our product.  We have nearly 200 regionservers across the
+    various clusters, and 2 Hadoop clusters also with nearly 200 tasktrackers.
+    We use c1.xlarge in EC2 for both, but are starting to move some of that to
+    bare-metal hardware.  We've been running HBase for over 2 years.</dd>
+
+  <dt><a href="http://www.infolinks.com/";>Infolinks</a></dt>
+  <dd>Infolinks is an In-Text ad provider. We use HBase to process 
advertisement
+    selection and user events for our In-Text ad network. The reports generated
+    from HBase are used as feedback for our production system to optimize ad
+    selection.</dd>
+
+  <dt><a href="http://www.kalooga.com";>Kalooga</a></dt>
+  <dd>Kalooga is a discovery service for image galleries. We use Hadoop, HBase
+    and Pig on a 20-node cluster for our crawling, analysis and events
+    processing.</dd>
+
+  <dt><a href="http://www.leanxcale.com/";>LeanXcale</a></dt>
+  <dd>LeanXcale provides an ultra-scalable transactional &amp; SQL database
+  that stores its data in HBase and is able to scale to thousands of nodes. It
+  also provides a standalone full-ACID HBase with transactions across
+  arbitrary sets of rows and tables.</dd>
+
+
+  <dt><a href="http://www.mahalo.com";>Mahalo</a></dt>
+  <dd>Mahalo, "...the world's first human-powered search engine". All the
+    markup that powers the wiki is stored in HBase. It's been in use for a few
+    months now. MediaWiki - the same software that powers Wikipedia - has
+    version/revision control. Mahalo's in-house editors produce a lot of
+    revisions per day, which was not working well in an RDBMS. An HBase-based
+    solution for this was built and tested, and the data migrated out of MySQL
+    and into HBase. Right now it's at something like 6 million items in HBase.
+    The upload tool runs every hour from a shell script to back up that data,
+    and on 6 nodes takes about 5-10 minutes to run - and does not slow down
+    production at all.</dd>
+
+  <dt><a href="http://www.meetup.com";>Meetup</a></dt>
+  <dd>Meetup is on a mission to help the world’s people self-organize into 
local
+    groups.  We use Hadoop and HBase to power a site-wide, real-time activity
+    feed system for all of our members and groups.  Group activity is written
+    directly to HBase, and indexed per member, with the member's custom feed
+    served directly from HBase for incoming requests.  We're running HBase
+    0.20.0 on an 11-node cluster.</dd>
+
+  <dt><a href="http://www.mendeley.com";>Mendeley</a></dt>
+  <dd>Mendeley is creating a platform for researchers to collaborate and share
+    their research online. HBase is helping us to create the world's largest
+    research paper collection and is being used to store all our raw imported 
data.
+    We use a lot of MapReduce jobs to process these papers into pages displayed
+    on the site. We also use HBase with Pig to do analytics and produce the 
article
+    statistics shown on the web site. You can find out more about how we use 
HBase
+    in the <a 
href="http://www.slideshare.net/danharvey/hbase-at-mendeley";>HBase
+    At Mendeley</a> slide presentation.</dd>
+
+  <dt><a href="http://www.ngdata.com";>NGDATA</a></dt>
+  <dd>NGDATA delivers <a href="http://www.ngdata.com/site/products/lily.html";>Lily</a>,
+    a consumer intelligence solution that provides a unique combination of Big
+    Data management, machine learning technologies and consumer intelligence
+    applications in one integrated solution to allow better, and more dynamic,
+    consumer insights. Lily allows companies to process and analyze massive
+    structured and unstructured data, scale storage elastically and locate
+    actionable data quickly from large data sources in near real time.</dd>
+
+  <dt><a href="http://ning.com";>Ning</a></dt>
+  <dd>Ning uses HBase to store and serve the results of processing user events
+    and log files, which allows us to provide near-real time analytics and
+    reporting. We use a small cluster of commodity machines with 4 cores and 
16GB
+    of RAM per machine to handle all our analytics and reporting needs.</dd>
+
+  <dt><a href="http://www.worldcat.org";>OCLC</a></dt>
+  <dd>OCLC uses HBase as the main data store for WorldCat, a union catalog
+    which aggregates the collections of 72,000 libraries in 112 countries and
+    territories. WorldCat currently comprises nearly 1 billion records with
+    nearly 2 billion library ownership indications. We're running a 50-node
+    HBase cluster and a separate offline map-reduce cluster.</dd>
+
+  <dt><a href="http://olex.openlogic.com";>OpenLogic</a></dt>
+  <dd>OpenLogic stores all the world's Open Source packages, versions, files,
+    and lines of code in HBase for both near-real-time access and analytical
+    purposes. The production cluster has well over 100TB of disk spread across
+    nodes with 32GB+ RAM and dual-quad or dual-hex core CPUs.</dd>
+
+  <dt><a href="http://www.openplaces.org";>Openplaces</a></dt>
+  <dd>Openplaces is a search engine for travel that uses HBase to store 
terabytes
+    of web pages and travel-related entity records (countries, cities, hotels,
+    etc.). We have dozens of MapReduce jobs that crunch data on a daily basis.
+    We use a 20-node cluster for development, a 40-node cluster for offline
+    production processing and an EC2 cluster for the live web site.</dd>
+
+  <dt><a href="http://www.pnl.gov";>Pacific Northwest National 
Laboratory</a></dt>
+  <dd>Hadoop and HBase (Cloudera distribution) are being used within PNNL's
+    Computational Biology &amp; Bioinformatics Group for a systems biology data
+    warehouse project that integrates high throughput proteomics and 
transcriptomics
+    data sets coming from instruments in the Environmental Molecular Sciences
+    Laboratory, a US Department of Energy national user facility located at 
PNNL.
+    The data sets are being merged and annotated with other public genomics
+    information in the data warehouse environment, with Hadoop analysis 
programs
+    operating on the annotated data in the HBase tables. This work is hosted by
+    <a href="http://www.pnl.gov/news/release.aspx?id=908";>olympus</a>, a large 
PNNL
+    institutional computing cluster, with the HBase tables being stored in 
olympus's
+    Lustre file system.</dd>
+
+  <dt><a href="http://www.readpath.com/";>ReadPath</a></dt>
+  <dd>ReadPath uses HBase to store several hundred million RSS items and a
+    dictionary for its RSS newsreader. ReadPath is currently running on an
+    8-node cluster.</dd>
+
+  <dt><a href="http://resu.me/";>resu.me</a></dt>
+  <dd>Career network for the net generation. We use HBase and Hadoop for all
+    aspects of our backend - user and resume data storage, analytics 
processing,
+    machine learning algorithms for our job recommendation engine. Our live
+    production site is directly served from HBase. We use Cascading for running
+    offline data processing jobs.</dd>
+
+  <dt><a href="http://www.runa.com/";>Runa Inc.</a></dt>
+  <dd>Runa Inc. offers a SaaS that enables online merchants to offer dynamic
+    per-consumer, per-product promotions embedded in their website. To
+    implement this, we collect the click streams of all their visitors and,
+    together with the merchant's rules, determine what promotion to offer the
+    visitor at different points of their browsing of the merchant's website.
+    So we have lots of data and have to do lots of off-line and real-time
+    analytics. HBase is the core for us. We also use Clojure and our own
+    open-sourced distributed processing framework, Swarmiji. The HBase
+    community has been key to our forward movement with HBase. We're looking
+    for experienced developers to join us to help make things go even
+    faster!</dd>
+
+  <dt><a href="http://www.sematext.com/";>Sematext</a></dt>
+  <dd>Sematext runs
+    <a href="http://www.sematext.com/search-analytics/index.html";>Search 
Analytics</a>,
+    a service that uses HBase to store search activity and MapReduce to produce
+    reports showing user search behaviour and experience. Sematext runs
+    <a href="http://www.sematext.com/spm/index.html";>Scalable Performance 
Monitoring (SPM)</a>,
+    a service that uses HBase to store performance data over time, crunch it 
with
+    the help of MapReduce, and display it in a visually rich browser-based UI.
+    Interestingly, SPM features
+    <a 
href="http://www.sematext.com/spm/hbase-performance-monitoring/index.html";>SPM 
for HBase</a>,
+    which is specifically designed to monitor all HBase performance 
metrics.</dd>
+
+  <dt><a href="http://www.socialmedia.com/";>SocialMedia</a></dt>
+  <dd>SocialMedia uses HBase to store and process user events, which allows us
+    to provide near-realtime user metrics and reporting. HBase forms the heart of
+    our Advertising Network data storage and management system. We use HBase as
+    a data source and sink for both realtime request cycle queries and as a
+    backend for mapreduce analysis.</dd>
+
+  <dt><a href="http://www.splicemachine.com/";>Splice Machine</a></dt>
+  <dd>Splice Machine is built on top of HBase.  It is a full-featured
+    ANSI SQL database that provides real-time updates, secondary indices, ACID
+    transactions, optimized joins, triggers, and UDFs.</dd>
+
+  <dt><a href="http://www.streamy.com/";>Streamy</a></dt>
+  <dd>Streamy is a recently launched realtime social news site.  We use HBase
+    for all of our data storage, query, and analysis needs, replacing an
+    existing SQL-based system.  This includes hundreds of millions of
+    documents, sparse matrices, logs, and everything else once done in the
+    relational system.  We perform significant in-memory caching of query
+    results, similar to a traditional Memcached/SQL setup, and use other
+    external components to perform joining and sorting.  We also run thousands
+    of daily MapReduce jobs using HBase tables for log analysis, attention
+    data processing, and feed crawling.  HBase has helped us scale and
+    distribute in ways we could not otherwise, and the community has provided
+    consistent and invaluable assistance.</dd>
+
+  <dt><a href="http://www.stumbleupon.com/";>Stumbleupon</a></dt>
+  <dd>Stumbleupon and <a href="http://su.pr";>Su.pr</a> use HBase as a real-time
+    data storage and analytics platform. Serving directly out of HBase, various
+    site features and statistics are kept up to date in a real-time fashion. We
+    also use HBase as a map-reduce data source to overcome traditional query
+    speed limits in MySQL.</dd>
+
+  <dt><a href="http://www.tokenizer.org";>Shopping Engine at Tokenizer</a></dt>
+  <dd>Shopping Engine at Tokenizer is a web crawler; it uses HBase to store
+    URLs and Outlinks (AnchorText + LinkedURL): more than a billion. It was
+    initially designed as a Nutch-Hadoop extension, then (due to a very
+    specific 'shopping' scenario) moved to SOLR + MySQL(InnoDB) (tens of
+    thousands of queries per second), and now to HBase. HBase is significantly
+    faster due to: no need for huge transaction logs, a column-oriented design
+    that exactly matches 'lazy' business logic, data compression, and MapReduce
+    support. The number of mutable 'indexes' (a term from RDBMS) is
+    significantly reduced due to the fact that each 'row::column' structure is
+    physically sorted by 'row'. The MySQL InnoDB engine is the best DB choice
+    for highly-concurrent updates. However, the necessity to flush a block of
+    data to the hard drive even if we changed only a few bytes is an obvious
+    bottleneck. HBase greatly helps: the 'delete-insert', 'mutable primary
+    key', and 'natural primary key' patterns, not so popular in modern DBMSs,
+    become a big advantage with HBase.</dd>
+
+  <dt><a href="http://traackr.com/";>Traackr</a></dt>
+  <dd>Traackr uses HBase to store and serve online influencer data in 
real-time.
+    We use MapReduce to frequently re-score our entire data set as we keep 
updating
+    influencer metrics on a daily basis.</dd>
+
+  <dt><a href="http://trendmicro.com/";>Trend Micro</a></dt>
+  <dd>Trend Micro uses HBase as a foundation for cloud scale storage for a 
variety
+    of applications. We have been developing with HBase since version 0.1 and
+    in production since version 0.20.0.</dd>
+
+  <dt><a href="http://www.twitter.com";>Twitter</a></dt>
+  <dd>Twitter runs HBase across its entire Hadoop cluster. HBase provides a
+    distributed, read/write backup of all MySQL tables in Twitter's production
+    backend, allowing engineers to run MapReduce jobs over the data while 
maintaining
+    the ability to apply periodic row updates (something that is more difficult
+    to do with vanilla HDFS).  A number of applications including people search
+    rely on HBase internally for data generation. Additionally, the operations
+    team uses HBase as a timeseries database for cluster-wide 
monitoring/performance
+    data.</dd>
+
+  <dt><a href="http://www.udanax.org";>Udanax.org</a></dt>
+  <dd>Udanax.org is a URL shortener which uses a 10-node HBase cluster to
+    store URLs and web log data and to serve real-time requests on its web
+    server. This application is now used by some Twitter clients and a number
+    of web sites. Currently API requests number almost 30 per second and web
+    redirection requests are about 300 per second.</dd>
+
+  <dt><a href="http://www.veoh.com/";>Veoh Networks</a></dt>
+  <dd>Veoh Networks uses HBase to store and process visitor (human) and entity
+    (non-human) profiles which are used for behavioral targeting, demographic
+    detection, and personalization services.  Our site reads this data in
+    real-time (heavily cached) and submits updates via various batch map/reduce
+    jobs. With 25 million unique visitors a month, storing this data in a
+    traditional RDBMS is not an option. We currently have a 24-node
+    Hadoop/HBase cluster and
+    our profiling system is sharing this cluster with our other Hadoop data
+    pipeline processes.</dd>
+
+  <dt><a href="http://www.videosurf.com/";>VideoSurf</a></dt>
+  <dd>VideoSurf - "The video search engine that has taught computers to see".
+    We're using HBase to persist various large graphs of data and other 
statistics.
+    HBase was a real win for us because it let us store substantially larger
+    datasets without the need for manually partitioning the data, and its
+    column-oriented nature allowed us to create schemas that were substantially
+    more efficient for storing and retrieving data.</dd>
+
+  <dt><a href="http://www.visibletechnologies.com/";>Visible 
Technologies</a></dt>
+  <dd>Visible Technologies uses Hadoop, HBase, Katta, and more to collect,
+    parse, store, and search hundreds of millions of pieces of social media
+    content. We get incredibly fast throughput and very low latency on
+    commodity hardware. HBase enables our business to exist.</dd>
+
+  <dt><a href="http://www.worldlingo.com/";>WorldLingo</a></dt>
+  <dd>The WorldLingo Multilingual Archive. We use HBase to store millions of
+    documents that we scan using Map/Reduce jobs to machine translate them into
+    all or selected target languages from our set of available machine
+    translation languages. We currently store 12 million documents but plan to
+    eventually reach the 450 million mark. HBase allows us to scale out as we
+    need to grow our storage capacities. Combined with Hadoop to keep the data
+    replicated and therefore fail-safe, we have the backbone our service can
+    rely on now and in the future. WorldLingo has been using HBase since
+    December 2007 and is, along with a few others, one of the longest-running
+    HBase installations. Currently we are running the latest HBase 0.20 and
+    serving directly from it at
+    <a href="http://www.worldlingo.com/ma/enwiki/en/HBase";>MultilingualArchive</a>.</dd>
+
+  <dt><a href="http://www.yahoo.com/";>Yahoo!</a></dt>
+  <dd>Yahoo! uses HBase to store document fingerprints for detecting
+    near-duplicates. We have a cluster of a few nodes that runs HDFS,
+    MapReduce, and HBase. The table contains millions of rows. We use this for
+    querying duplicated documents with realtime traffic.</dd>
+
+  <dt><a 
href="http://h50146.www5.hp.com/products/software/security/icewall/eng/";>HP 
IceWall SSO</a></dt>
+  <dd>HP IceWall SSO is a web-based single sign-on solution and uses HBase to
+    store user data to authenticate users. We previously supported RDB and
+    LDAP, but have newly added HBase support with a view to authenticating
+    tens of millions of users and devices.</dd>
+
+  <dt><a 
href="http://www.ymc.ch/en/big-data-analytics-en?utm_source=hadoopwiki&amp;utm_medium=poweredbypage&amp;utm_campaign=ymc.ch";>YMC
 AG</a></dt>
+  <dd><ul>
+    <li>operating a Cloudera Hadoop/HBase cluster for media monitoring
+    purposes</li>
+    <li>offering technical and operative consulting for the Hadoop stack and
+    ecosystem</li>
+    <li>editor of <a href="http://www.ymc.ch/en/hbase-split-visualisation-introducing-hannibal?utm_source=hadoopwiki&amp;utm_medium=poweredbypage&amp;utm_campaign=ymc.ch";>Hannibal</a>,
+    an open-source tool to visualize HBase region sizes and splits, which
+    helps when running HBase in production</li>
+  </ul></dd>
+  </dl>
+</section>
+</body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/pseudo-distributed.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/pseudo-distributed.xml 
b/src/site/xdoc/pseudo-distributed.xml
new file mode 100644
index 0000000..670f1e7
--- /dev/null
+++ b/src/site/xdoc/pseudo-distributed.xml
@@ -0,0 +1,42 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd";>
+
+<document xmlns="http://maven.apache.org/XDOC/2.0";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 
http://maven.apache.org/xsd/xdoc-2.0.xsd";>
+  <properties>
+    <title> 
+Running Apache HBase (TM) in pseudo-distributed mode
+    </title>
+  </properties>
+
+  <body>
+      <p>This page has been retired.  The contents have been moved to the
+      <a href="http://hbase.apache.org/book.html#distributed";>Distributed Operation: Pseudo- and Fully-distributed modes</a> section
+      in the Reference Guide.
+      </p>
+
+ </body>
+
+</document>
+

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/replication.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/replication.xml b/src/site/xdoc/replication.xml
new file mode 100644
index 0000000..a2fcfcb
--- /dev/null
+++ b/src/site/xdoc/replication.xml
@@ -0,0 +1,35 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd";>
+
+<document xmlns="http://maven.apache.org/XDOC/2.0";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 
http://maven.apache.org/xsd/xdoc-2.0.xsd";>
+  <properties>
+    <title>
+      Apache HBase (TM) Replication
+    </title>
+  </properties>
+  <body>
+    <p>This information has been moved to <a 
href="http://hbase.apache.org/book.html#cluster_replication";>the Cluster 
Replication</a> section of the <a 
href="http://hbase.apache.org/book.html";>Apache HBase Reference Guide</a>.</p>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/resources.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/resources.xml b/src/site/xdoc/resources.xml
new file mode 100644
index 0000000..19548b6
--- /dev/null
+++ b/src/site/xdoc/resources.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 
http://maven.apache.org/xsd/xdoc-2.0.xsd";>
+  <properties>
+    <title>Other Apache HBase (TM) Resources</title>
+  </properties>
+
+<body>
+<section name="Other Apache HBase Resources">
+<section name="Books">
+<section name="HBase: The Definitive Guide">
+<p><a href="http://shop.oreilly.com/product/0636920014348.do";>HBase: The 
Definitive Guide <i>Random Access to Your Planet-Size Data</i></a> by Lars 
George. Publisher: O'Reilly Media, Released: August 2011, Pages: 556.</p>
+</section>
+<section name="HBase In Action">
+<p><a href="http://www.manning.com/dimidukkhurana/";>HBase In Action</a> By 
Nick Dimiduk and Amandeep Khurana.  Publisher: Manning, MEAP Began: January 
2012, Softbound print: Fall 2012, Pages: 350.</p>
+</section>
+<section name="HBase Administration Cookbook">
+<p><a 
href="http://www.packtpub.com/hbase-administration-for-optimum-database-performance-cookbook/book";>HBase
 Administration Cookbook</a> by Yifeng Jiang.  Publisher: PACKT Publishing, 
Release: Expected August 2012, Pages: 335.</p>
+</section>
+<section name="HBase High Performance Cookbook">
+  <p><a 
href="https://www.packtpub.com/big-data-and-business-intelligence/hbase-high-performance-cookbook";>HBase
 High Performance Cookbook</a> by Ruchir Choudhry.  Publisher: PACKT 
Publishing, Release: January 2017, Pages: 350.</p>
+</section>
+</section>
+</section>
+</body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/sponsors.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/sponsors.xml b/src/site/xdoc/sponsors.xml
new file mode 100644
index 0000000..332f56a
--- /dev/null
+++ b/src/site/xdoc/sponsors.xml
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 
http://maven.apache.org/xsd/xdoc-2.0.xsd";>
+  <properties>
+    <title>Apache HBase&#153; Sponsors</title>
+  </properties>
+
+<body>
+<section name="Sponsors">
+    <p>First off, thanks to <a 
href="http://www.apache.org/foundation/thanks.html";>all who sponsor</a>
+       our parent, the Apache Software Foundation.
+    </p>
+<p>The companies below have been gracious enough to provide their commercial tool offerings free of charge to the Apache HBase&#153; project.
+<ul>
+       <li>The crew at <a href="http://www.ej-technologies.com/";>ej-technologies</a> have
+        let us use <a href="http://www.ej-technologies.com/products/jprofiler/overview.html";>JProfiler</a> for years now.</li>
+       <li>The lads at <a href="http://headwaysoftware.com/";>Headway Software</a> have
+        given us a license for <a 
href="http://headwaysoftware.com/products/?code=Restructure101";>Restructure101</a>
+        so we can untangle our interdependency mess.</li>
+       <li><a href="http://www.yourkit.com";>YourKit</a> allows us to use their 
<a href="http://www.yourkit.com/overview/index.jsp";>Java Profiler</a>.</li>
+       <li>Some of us use <a href="http://www.jetbrains.com/idea";>IntelliJ 
IDEA</a> thanks to <a href="http://www.jetbrains.com/";>JetBrains</a>.</li>
+  <li>Thank you to Boris at <a href="http://www.vectorportal.com/";>Vector 
Portal</a> for granting us a license on the <a 
href="http://www.vectorportal.com/subcategory/205/KILLER-WHALE-FREE-VECTOR.eps/ifile/9136/detailtest.asp";>image</a>
 on which our logo is based.</li>
+</ul>
+</p>
+</section>
+<section name="Sponsoring the Apache Software Foundation">
+<p>To contribute to the Apache Software Foundation, a good idea in our 
opinion, see the <a 
href="http://www.apache.org/foundation/sponsorship.html";>ASF Sponsorship</a> 
page.
+</p>
+</section>
+</body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/7da47509/src/site/xdoc/supportingprojects.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/supportingprojects.xml 
b/src/site/xdoc/supportingprojects.xml
new file mode 100644
index 0000000..f949a57
--- /dev/null
+++ b/src/site/xdoc/supportingprojects.xml
@@ -0,0 +1,161 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 
http://maven.apache.org/xsd/xdoc-2.0.xsd";>
+  <properties>
+    <title>Supporting Projects</title>
+  </properties>
+
+<body>
+<section name="Supporting Projects">
+  <p>This page is a list of projects that are related to HBase. To
+    have your project added, file a documentation JIRA or email
+    <a href="mailto:d...@hbase.apache.org";>hbase-dev</a> with the relevant
+    information. If you notice out-of-date information, use the same avenues to
+    report it.
+  </p>
+  <p><b>These items are user-submitted and the HBase team assumes no 
responsibility for their accuracy.</b></p>
+  <h3>Projects that add new features to HBase</h3>
+  <dl>
+   <dt><a href="https://github.com/XiaoMi/themis/";>Themis</a></dt>
+   <dd>Themis provides cross-row/cross-table transactions on HBase based on
+    Google's Percolator.</dd>
+   <dt><a href="https://github.com/caskdata/tephra";>Tephra</a></dt>
+   <dd>Cask Tephra provides globally consistent transactions on top of Apache
+    HBase.</dd>
+   <dt><a href="https://github.com/VCNC/haeinsa";>Haeinsa</a></dt>
+   <dd>Haeinsa is a linearly scalable multi-row, multi-table transaction
+    library for HBase.</dd>
+   <dt><a href="https://github.com/juwi/HBase-TAggregator";>HBase 
TAggregator</a></dt>
+   <dd>An HBase coprocessor for timeseries-based aggregations.</dd>
+   <dt><a href="http://trafodion.incubator.apache.org/";>Apache 
Trafodion</a></dt>
+   <dd>Apache Trafodion is a webscale SQL-on-Hadoop solution enabling
+    transactional or operational workloads on Hadoop.</dd>
+   <dt><a href="http://phoenix.apache.org/";>Apache Phoenix</a></dt>
+   <dd>Apache Phoenix is a relational database layer over HBase delivered as a
+    client-embedded JDBC driver targeting low latency queries over HBase 
data.</dd>
+   <dt><a href="https://github.com/cloudera/hue/tree/master/apps/hbase";>Hue 
HBase Browser</a></dt>
+   <dd>An Easy &amp; Powerful WebUI for HBase, distributed with <a 
href="https://www.gethue.com";>Hue</a>.</dd>
+   <dt><a 
href="https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep";>HBase 
SEP</a></dt>
+   <dd>The HBase Side Effect Processor, a system for asynchronously and
+    reliably listening to HBase mutation events, based on HBase
+    replication.</dd>
+   <dt><a href="https://github.com/ngdata/hbase-indexer";>Lily HBase 
Indexer</a></dt>
+   <dd>Indexes HBase content to Solr by listening to the replication stream
+    (uses the HBase SEP).</dd>
+   <dt><a href="https://github.com/sonalgoyal/crux/";>Crux</a></dt>
+   <dd>HBase Reporting and Analysis with support for simple and composite
+    keys, get and range scans, column based filtering, charting.</dd>
+   <dt><a href="https://github.com/yahoo/omid/";>Omid</a></dt>
+   <dd>Lock-free transactional support on top of HBase providing Snapshot
+    Isolation.</dd>
+   <dt><a href="http://dev.tailsweep.com/projects/parhely";>Parhely</a></dt>
+   <dd>ORM for HBase</dd>
+   <dt><a href="http://code.google.com/p/hbase-writer/";>HBase-Writer</a></dt>
+   <dd>Heritrix2 Processor for writing crawls to HBase.</dd>
+   <dt><a href="http://www.pigi-project.org/">Pigi Project</a></dt>
+   <dd>The Pigi Project is an ORM-like framework. It includes a configurable
+    index system and a simple object-to-HBase mapping framework (or indexing
+    for HBase, if you like). Designed for use by web applications.</dd>
+   <dt><a href="http://code.google.com/p/hbase-thrift/">hbase-thrift</a></dt>
+   <dd>hbase-thrift generates and installs Perl and Python Thrift bindings for
+    HBase.</dd>
+   <dt><a href="http://belowdeck.kissintelligentsystems.com/ohm">OHM</a></dt>
+   <dd>OHM is a weakly relational ORM for HBase which provides object mapping
+    and column indexing. It has its own compiler capable of generating interface
+    code for multiple languages: currently C# (via the Thrift API), with support
+    for Java in development. The compiler is easily extensible to add support
+    for other languages.</dd>
+   <dt><a href="http://datastore.googlecode.com">datastore</a></dt>
+   <dd>Aims to be an implementation of the
+    <a href="http://code.google.com/appengine/docs/python/datastore/">Google App Engine datastore</a>
+    in Java, using HBase instead of Bigtable.</dd>
+   <dt><a href="http://datanucleus.org">DataNucleus</a></dt>
+   <dd>DataNucleus is a Java JDO/JPA/REST implementation. It supports HBase and
+    many other datastores.</dd>
+   <dt><a href="http://github.com/impetus-opensource/Kundera">Kundera</a></dt>
+   <dd>Kundera is a JPA 2.0-based object-datastore mapping library for HBase,
+    Cassandra, and MongoDB.</dd>
+   <dt><a href="http://github.com/zohmg/zohmg/tree/master">Zohmg</a></dt>
+   <dd>Zohmg is a time-series data store that uses HBase as its backing store.</dd>
+   <dt><a href="http://grails.org/plugin/gorm-hbase">Grails Support</a></dt>
+   <dd>An HBase plug-in for Grails.</dd>
+   <dt><a href="http://www.bigrecord.org">BigRecord</a></dt>
+   <dd>An ActiveRecord-based object mapping layer for Ruby on Rails.</dd>
+   <dt><a href="http://github.com/greglu/hbase-stargate">hbase-stargate</a></dt>
+   <dd>A Ruby client for HBase Stargate.</dd>
+   <dt><a href="http://github.com/ghelmling/meetup.beeno">Meetup.Beeno</a></dt>
+   <dd>Meetup.Beeno is a simple HBase Java "beans" mapping framework based on
+    annotations. It includes a rudimentary high-level query API that generates
+    the appropriate server-side filters.</dd>
+   <dt><a href="http://www.springsource.org/spring-data/hadoop">Spring Hadoop</a></dt>
+   <dd>The Spring Hadoop project provides support for writing Apache Hadoop
+    applications that benefit from the features of Spring, Spring Batch and
+    Spring Integration.</dd>
+   <dt><a href="https://jira.springsource.org/browse/SPR-5950">Spring Framework HBase Template</a></dt>
+   <dd>Spring Framework HBase Template provides HBase data access templates
+    similar to those provided in Spring for JDBC, Hibernate, iBatis, etc.
+    If you find this useful, please vote for its inclusion in the Spring Framework.</dd>
+   <dt><a href="http://github.com/davidsantiago/clojure-hbase">Clojure-HBase</a></dt>
+   <dd>A library for convenient access to HBase from Clojure.</dd>
+   <dt><a href="http://www.lilyproject.org/lily/about/playground/hbaseindexes.html">HBase indexing library</a></dt>
+   <dd>A library for building and querying HBase-table-based indexes.</dd>
+   <dt><a href="http://github.com/akkumar/hbasene">HBasene</a></dt>
+   <dd>Lucene + HBase: uses HBase as the backing store for the TF-IDF
+    representations needed by Lucene. Also contains a library for constructing
+    Lucene indices from an HBase schema.</dd>
+   <dt><a href="http://github.com/larsgeorge/jmxtoolkit">JMXToolkit</a></dt>
+   <dd>An HBase-tailored JMX toolkit enabling monitoring with Cacti and checking
+    with Nagios or similar tools.</dd>
+   <dt><a href="http://github.com/ykulbak/ihbase">IHBASE</a></dt>
+   <dd>IHBASE provides faster scans by indexing regions; each region has its own
+    index. The indexed columns are user-defined, and indexes can be intersected
+    or joined in a single query.</dd>
+   <dt><a href="http://github.com/apurtell/hbase-ec2">HBase EC2 scripts</a></dt>
+   <dd>This collection of bash scripts allows you to run HBase clusters on
+    Amazon's Elastic Compute Cloud (EC2) service with best practices baked in.</dd>
+   <dt><a href="http://github.com/apurtell/hbase-stargate">Stargate</a></dt>
+   <dd>Stargate provides an enhanced RESTful interface to HBase.</dd>
+   <dt><a href="http://github.com/hbase-trx/hbase-transactional-tableindexed">HBase-trx</a></dt>
+   <dd>HBase-trx provides transactional (JTA) and indexed extensions of HBase.</dd>
+   <dt><a href="http://github.com/simplegeo/python-hbase-thrift">HBase Thrift Python client Debian package</a></dt>
+   <dd>Debian packages for the HBase Thrift Python client (see the readme for
+    sources.list setup).</dd>
+   <dt><a href="http://github.com/amitrathore/capjure">capjure</a></dt>
+   <dd>capjure is a persistence helper for HBase. It is written in the Clojure
+    language and supports persisting native hash-maps.</dd>
+   <dt><a href="http://github.com/sematext/HBaseHUT">HBaseHUT</a></dt>
+   <dd>High Update Throughput for HBase. It focuses on write performance during
+    record updates (by avoiding a Get on every Put used to update a record).</dd>
+   <dt><a href="http://github.com/sematext/HBaseWD">HBaseWD</a></dt>
+   <dd>HBase Writes Distributor spreads records over the cluster even when their
+    keys are sequential, while still allowing fast range scans over them.</dd>
+   <dt><a href="http://code.google.com/p/hbase-jdo/">HBase UI Tool &amp; Util</a></dt>
+   <dd>HBase UI Tool &amp; Util is an HBase UI client and simple utility module.
+    It makes working with HBase easier, in a JDO-like style (though it is not a
+    persistence API).</dd>
+  </dl>
+  <h3>Example HBase Applications</h3>
+  <ul>
+    <li><a href="http://github.com/andreisavu/feedaggregator">HBase-powered feed aggregator</a>
+    by Savu Andrei -- 200909</li>
+  </ul>
+</section>
+</body>
+</document>
