Author: jamestaylor
Date: Fri Mar 11 04:33:05 2016
New Revision: 1734494

URL: http://svn.apache.org/viewvc?rev=1734494&view=rev
Log:
Updating project description based on transaction support

Modified:
    phoenix/doap_phoenix.rdf
    phoenix/site/publish/index.html
    phoenix/site/source/src/site/markdown/index.md

Modified: phoenix/doap_phoenix.rdf
URL: 
http://svn.apache.org/viewvc/phoenix/doap_phoenix.rdf?rev=1734494&r1=1734493&r2=1734494&view=diff
==============================================================================
--- phoenix/doap_phoenix.rdf (original)
+++ phoenix/doap_phoenix.rdf Fri Mar 11 04:33:05 2016
@@ -27,8 +27,8 @@
     <name>Apache Phoenix</name>
    <homepage rdf:resource="http://phoenix.apache.org" />
    <asfext:pmc rdf:resource="http://phoenix.apache.org" />
-    <shortdesc>Apache Phoenix is a relational database layer on top of Apache 
HBase.</shortdesc>
-    <description>Apache Phoenix is a relational database layer on top of 
Apache HBase. It is accessed as a JDBC driver and enables querying, updating, 
and managing HBase tables through standard SQL. Instead of using map-reduce, 
Apache Phoenix compiles your SQL query into a series of HBase scans and 
orchestrates the running of those scans to produce regular JDBC result sets. 
Direct use of the HBase API, along with coprocessors and custom filters, 
results in performance on the order of milliseconds for small queries, or 
seconds for tens of millions of rows.</description>
+    <shortdesc>Apache Phoenix enables OLTP and SQL-based operational analytics 
for Hadoop.</shortdesc>
+    <description>Apache Phoenix enables OLTP and operational analytics for 
Hadoop by providing a relational database layer leveraging Apache HBase as its 
backing store. It includes integration with Spark, Pig, Flume, MapReduce, and 
other products in the Hadoop ecosystem. It is accessed as a JDBC driver and 
enables querying, updating, and managing HBase tables through standard 
SQL.</description>
    <bug-database rdf:resource="http://issues.apache.org/jira/browse/PHOENIX" />
    <mailing-list rdf:resource="http://phoenix.apache.org/mailing_list.html" />
    <download-page rdf:resource="http://phoenix.apache.org/download.html" />
@@ -38,37 +38,23 @@
    <category rdf:resource="http://projects.apache.org/category/database" />
     <release>
       <Version>
-        <name>Apache Phoenix 3.3.1</name>
-        <created>2015-04-07</created>
-        <revision>3.3.1</revision>
+        <name>Apache Phoenix 4.7.0 HBase 0.98</name>
+        <created>2016-03-08</created>
+        <revision>4.7.0-HBase-0.98</revision>
       </Version>
     </release>
     <release>
       <Version>
-        <name>Apache Phoenix 4.3.1</name>
-        <created>2015-04-07</created>
-        <revision>4.3.1</revision>
+        <name>Apache Phoenix 4.7.0 HBase 1.0</name>
+        <created>2016-03-08</created>
+        <revision>4.7.0-HBase-1.0</revision>
       </Version>
     </release>
     <release>
       <Version>
-        <name>Apache Phoenix 4.4.0 HBase 0.98</name>
-        <created>2015-05-18</created>
-        <revision>4.4.0-HBase-0.98</revision>
-      </Version>
-    </release>
-    <release>
-      <Version>
-        <name>Apache Phoenix 4.4.0 HBase 1.0</name>
-        <created>2015-05-18</created>
-        <revision>4.4.0-HBase-1.0</revision>
-      </Version>
-    </release>
-    <release>
-      <Version>
-        <name>Apache Phoenix 4.4.0 HBase 1.1</name>
-        <created>2015-05-29</created>
-        <revision>4.4.0-HBase-1.1</revision>
+        <name>Apache Phoenix 4.7.0 HBase 1.1</name>
+        <created>2016-03-08</created>
+        <revision>4.7.0-HBase-1.1</revision>
       </Version>
     </release>
     <repository>

Modified: phoenix/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/index.html?rev=1734494&r1=1734493&r2=1734494&view=diff
==============================================================================
--- phoenix/site/publish/index.html (original)
+++ phoenix/site/publish/index.html Fri Mar 11 04:33:05 2016
@@ -147,7 +147,7 @@
 <div class="section"> 
  <div class="section"> 
   <div class="section"> 
-   <h4 align="center" 
id="High_performance_relational_database_layer_over_HBase_for_low_latency_applications">High
 performance relational database layer over HBase for low latency 
applications</h4> 
+   <h4 align="center" id="OLTP_and_operational_analytics_for_Hadoop">OLTP and 
operational analytics for Hadoop</h4> 
    <p></p> 
    <br /> 
    <p><span> </span></p> 
@@ -178,7 +178,6 @@
     </tbody> 
    </table> 
    <p><span id="alerts" style="background-color:#ffc; text-align: 
center;display: block;padding:10px; border-bottom: solid 1px #cc9"> <b><a 
href="news.html">News</a>:</b> Announcing <a 
href="transactions.html">transaction support</a> in 4.7.0 release &nbsp; &nbsp; 
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a 
class="externalLink" href="https://twitter.com/ApachePhoenix"><img 
src="images/follow.png" title="Follow Apache Phoenix on Twitter" alt="" 
/></a></span></p> 
-   <hr /> 
   </div> 
  </div> 
 </div> 
@@ -186,12 +185,17 @@
  <div class="page-header">
   <h2 id="Overview">Overview</h2>
  </div> 
- <p>Apache Phoenix is a relational database layer over HBase supporting full 
ACID transactions and delivered as a client-embedded JDBC driver that targets 
low latency queries over HBase data. Apache Phoenix takes your SQL query, 
compiles it into a series of HBase scans, and orchestrates the running of those 
scans to produce regular JDBC result sets. The table metadata is stored in an 
HBase table and versioned, such that snapshot queries over prior versions will 
automatically use the correct schema. Direct use of the HBase API, along with 
coprocessors and custom filters, results in <a 
href="performance.html">performance</a> on the order of milliseconds for small 
queries, or seconds for tens of millions of rows. </p> 
+ <p>Apache Phoenix enables OLTP and operational analytics in Hadoop for low 
latency applications by combining the best of both worlds:</p> 
+ <ul> 
+  <li>the power of standard SQL and JDBC APIs with full ACID transaction 
capabilities and</li> 
+  <li>the flexibility of late-bound, schema-on-read capabilities from the 
NoSQL world by leveraging HBase as its backing store</li> 
+ </ul> 
+ <p>Apache Phoenix is fully integrated with other Hadoop products such as 
Spark, Hive, Pig, Flume, and MapReduce.</p> 
  <p align="center"> <br />Who is using Apache Phoenix? Read more <a 
href="who_is_using.html">here...</a><br /> <img src="images/using/all.png" 
alt="" /> </p> 
 </div> 
 <div class="section"> 
  <h2 id="Mission">Mission</h2> 
- <p>Become the standard means of accessing HBase data through a well-defined, 
industry standard API.</p> 
+ <p>Become the trusted data platform for OLTP and operational analytics for 
Hadoop through well-defined, industry standard APIs.</p> 
 </div> 
 <div class="section"> 
  <h2 id="Quick_Start">Quick Start</h2> 
@@ -199,31 +203,30 @@
 </div> 
 <div class="section"> 
  <h2 id="SQL_Support">SQL Support</h2> 
- <p>To see what’s supported, go to our <a 
href="language/index.html">language reference</a>. It includes all typical SQL 
query statement clauses, including <tt>SELECT</tt>, <tt>FROM</tt>, 
<tt>WHERE</tt>, <tt>GROUP BY</tt>, <tt>HAVING</tt>, <tt>ORDER BY</tt>, etc. It 
also supports a full set of DML commands as well as table creation and 
versioned incremental alterations through our DDL commands. We try to follow 
the SQL standards wherever possible.</p> 
- <p><a name="connStr" id="connStr"></a>Use JDBC to get a connection to an 
HBase cluster like this:</p> 
- <div> 
-  <pre><tt>Connection conn = 
DriverManager.getConnection(&quot;jdbc:phoenix:server1,server2:3333&quot;,props);</tt></pre>
 
- </div> 
- <p>where <tt>props</tt> are optional properties which may include Phoenix and 
HBase configuration properties, and the connection string which is composed of: 
</p> 
- <div> 
-  <pre><tt>jdbc:phoenix</tt> [ <tt>:&lt;zookeeper quorum&gt;</tt> [ 
<tt>:&lt;port number&gt;</tt> ] [ <tt>:&lt;root node&gt;</tt> ] [ 
<tt>:&lt;principal&gt;</tt> ] [ <tt>:&lt;keytab file&gt;</tt> ] ] </pre> 
- </div> 
- <p>For any omitted parts, the relevant property value, 
hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and 
zookeeper.znode.parent will be used from hbase-site.xml configuration file. The 
optional <tt>principal</tt> and <tt>keytab file</tt> may be used to connect to 
a Kerberos secured cluster. If only <tt>principal</tt> is specified, then this 
defines the user name with each distinct user having their own dedicated HBase 
connection (HConnection). This provides a means of having multiple, different 
connections each with different configuration properties on the same JVM.</p> 
- <p>For example, the following connection string might be used for longer 
running queries, where the <tt>longRunningProps</tt> specifies Phoenix and 
HBase configuration properties with longer timeouts: </p> 
- <div> 
-  <pre><tt>Connection conn = 
DriverManager.getConnection(“jdbc:phoenix:my_server:longRunning”, 
longRunningProps);</tt></pre> 
- </div> while the following connection string might be used for shorter 
running queries: 
- <div> 
-  <pre><tt>Connection conn = 
DriverManager.getConnection(&quot;jdbc:phoenix:my_server:shortRunning&quot;, 
shortRunningProps);</tt></pre> 
- </div> 
+ <p>Apache Phoenix takes your SQL query, compiles it into a series of HBase 
scans, and orchestrates the running of those scans to produce regular JDBC 
result sets. Direct use of the HBase API, along with coprocessors and custom 
filters, results in <a href="performance.html">performance</a> on the order of 
milliseconds for small queries, or seconds for tens of millions of rows.</p> 
+ <p>To see a complete list of what is supported, go to our <a 
href="language/index.html">language reference</a>. All standard SQL query 
constructs are supported, including <tt>SELECT</tt>, <tt>FROM</tt>, 
<tt>WHERE</tt>, <tt>GROUP BY</tt>, <tt>HAVING</tt>, <tt>ORDER BY</tt>, etc. It 
also supports a full set of DML commands as well as table creation and 
versioned incremental alterations through our DDL commands.</p> 
+ <p>Here’s a list of what is currently <b>not</b> supported:</p> 
+ <ul> 
+  <li><b>Relational operators</b>. Intersect, Minus.</li> 
+  <li><b>Miscellaneous built-in functions</b>. These are easy to add; read 
this <a class="externalLink" 
href="http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html">blog</a> 
for step-by-step instructions.</li> 
+ </ul> 
  <div class="section"> 
-  <div class="section"> 
-   <h4 id="Not_Supported">Not Supported</h4> 
-   <p>Here’s a list of what is currently <b>not</b> supported:</p> 
-   <ul> 
-    <li><b>Relational operators</b>. Intersect, Minus.</li> 
-    <li><b>Miscellaneous built-in functions</b>. These are easy to add - read 
this <a class="externalLink" 
href="http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html">blog</a>
 for step by step instructions.</li> 
-   </ul> 
+  <h3 id="connStr">Connection<a name="Connection"></a></h3> 
+  <p>Use JDBC to get a connection to an HBase cluster like this:</p> 
+  <div> 
+   <pre><tt>Connection conn = 
DriverManager.getConnection(&quot;jdbc:phoenix:server1,server2:3333&quot;,props);</tt></pre>
 
+  </div> 
+  <p>where <tt>props</tt> are optional properties that may include Phoenix 
and HBase configuration settings. The connection string is composed 
of: </p> 
+  <div> 
+   <pre><tt>jdbc:phoenix</tt> [ <tt>:&lt;zookeeper quorum&gt;</tt> [ 
<tt>:&lt;port number&gt;</tt> ] [ <tt>:&lt;root node&gt;</tt> ] [ 
<tt>:&lt;principal&gt;</tt> ] [ <tt>:&lt;keytab file&gt;</tt> ] ] </pre> 
+  </div> 
+  <p>For any omitted parts, the relevant property values 
(hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and 
zookeeper.znode.parent) will be read from the hbase-site.xml configuration file. The 
optional <tt>principal</tt> and <tt>keytab file</tt> may be used to connect to 
a Kerberos secured cluster. If only <tt>principal</tt> is specified, then this 
defines the user name with each distinct user having their own dedicated HBase 
connection (HConnection). This provides a means of having multiple, different 
connections each with different configuration properties on the same JVM.</p> 
+  <p>For example, the following connection string might be used for longer 
running queries, where the <tt>longRunningProps</tt> specifies Phoenix and 
HBase configuration properties with longer timeouts: </p> 
+  <div> 
+   <pre><tt>Connection conn = 
DriverManager.getConnection(&quot;jdbc:phoenix:my_server:longRunning&quot;, 
longRunningProps);</tt></pre> 
+  </div> while the following connection string might be used for shorter 
running queries: 
+  <div> 
+   <pre><tt>Connection conn = 
DriverManager.getConnection(&quot;jdbc:phoenix:my_server:shortRunning&quot;, 
shortRunningProps);</tt></pre> 
   </div> 
  </div> 
 </div> 
@@ -241,7 +244,7 @@
 </div> 
 <div class="section"> 
  <h2 id="schema">Schema<a name="Schema"></a></h2> 
- <p>Apache Phoenix supports table creation and versioned incremental 
alterations through DDL commands. The table metadata is stored in an HBase 
table.</p> 
+ <p>Apache Phoenix supports table creation and versioned incremental 
alterations through DDL commands. The table metadata is stored in an HBase 
table and versioned, such that snapshot queries over prior versions will 
automatically use the correct schema. </p> 
  <p>A Phoenix table is created through the <a 
href="language/index.html#create">CREATE TABLE</a> command and can either 
be:</p> 
  <ol style="list-style-type: decimal"> 
   <li><b>built from scratch</b>, in which case the HBase table and column 
families will be created automatically.</li> 

Modified: phoenix/site/source/src/site/markdown/index.md
URL: 
http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/index.md?rev=1734494&r1=1734493&r2=1734494&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/index.md (original)
+++ phoenix/site/source/src/site/markdown/index.md Fri Mar 11 04:33:05 2016
@@ -2,7 +2,7 @@
 <br/>
 <p align="center">
 <img src="images/phoenix-logo-small.png"/>
-<h4 align="center">High performance relational database layer over HBase for 
low latency applications</h4> 
+<h4 align="center">OLTP and operational analytics for Hadoop</h4> 
 </p>
 <br/>
 
@@ -48,27 +48,36 @@
 Announcing [transaction support](transactions.html) in 4.7.0 release 
 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 
&nbsp; 
 <a href='https://twitter.com/ApachePhoenix'><img title="Follow Apache Phoenix 
on Twitter" src="images/follow.png"/></a></span>
-
-<hr/>
-
 ## Overview
+Apache Phoenix enables OLTP and operational analytics in Hadoop for low 
latency applications by combining the best of both worlds:
 
-Apache Phoenix is a relational database layer over HBase supporting full ACID 
transactions and delivered as a client-embedded JDBC driver that targets low 
latency queries over HBase data. Apache Phoenix takes your SQL query, compiles 
it into a series of HBase scans, and orchestrates the running of those scans to 
produce regular JDBC result sets. The table metadata is stored in an HBase 
table and versioned, such that snapshot queries over prior versions will 
automatically use the correct schema. Direct use of the HBase API, along with 
coprocessors and custom filters, results in [performance](performance.html) on 
the order of milliseconds for small queries, or seconds for tens of millions of 
rows. 
+* the power of standard SQL and JDBC APIs with full ACID transaction 
capabilities and
+* the flexibility of late-bound, schema-on-read capabilities from the NoSQL 
world by leveraging HBase as its backing store
+
+Apache Phoenix is fully integrated with other Hadoop products such as Spark, 
Hive, Pig, Flume, and MapReduce.
 
 <p align="center">
 <br/>Who is using Apache Phoenix? Read more <a 
href="who_is_using.html">here...</a><br/>
 <img src="images/using/all.png"/>
 </p>
 ## Mission
-Become the standard means of accessing HBase data through a well-defined, 
industry standard API.
+Become the trusted data platform for OLTP and operational analytics for Hadoop 
through well-defined, industry standard APIs.
 
 ## Quick Start
 Tired of reading already and just want to get started? Take a look at our 
[FAQs](faq.html), listen to the Apache Phoenix talk from [Hadoop Summit 
2015](https://www.youtube.com/watch?v=XGa0SyJMH94), review the [overview 
presentation](http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf), 
and jump over to our quick start guide 
[here](Phoenix-in-15-minutes-or-less.html).
 
 ## SQL Support
-To see what's supported, go to our [language reference](language/index.html). 
It includes all typical SQL query statement clauses, including `SELECT`, 
`FROM`, `WHERE`, `GROUP BY`, `HAVING`, `ORDER BY`, etc. It also supports a full 
set of DML commands as well as table creation and versioned incremental 
alterations through our DDL commands. We try to follow the SQL standards 
wherever possible.
+Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, 
and orchestrates the running of those scans to produce regular JDBC result 
sets. Direct use of the HBase API, along with coprocessors and custom filters, 
results in [performance](performance.html) on the order of milliseconds for 
small queries, or seconds for tens of millions of rows.
+
+To see a complete list of what is supported, go to our [language 
reference](language/index.html). All standard SQL query constructs are 
supported, including `SELECT`, `FROM`, `WHERE`, `GROUP BY`, `HAVING`, `ORDER 
BY`, etc. It also supports a full set of DML commands as well as table creation 
and versioned incremental alterations through our DDL commands.
 
-<a id="connStr"></a>Use JDBC to get a connection to an HBase cluster like this:
+Here's a list of what is currently **not** supported:
+
+* **Relational operators**. Intersect, Minus.
+* **Miscellaneous built-in functions**. These are easy to add; read this 
[blog](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html) 
for step-by-step instructions.
+
+###<a id="connStr"></a>Connection
+Use JDBC to get a connection to an HBase cluster like this:
 
 <pre><code>Connection conn = 
DriverManager.getConnection("jdbc:phoenix:server1,server2:3333",props);</code></pre>
 where <code>props</code> are optional properties which may include Phoenix and 
HBase configuration properties, and
@@ -89,13 +98,7 @@ while the following connection string mi
 <pre><code>Connection conn = 
DriverManager.getConnection("jdbc:phoenix:my_server:shortRunning", 
shortRunningProps);</code></pre>
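As a sketch of how the colon-separated connection-string parts above compose, the helper below joins optional parts (zookeeper quorum, port, root node, principal, keytab file) onto the `jdbc:phoenix` prefix. The helper itself is hypothetical, not part of Phoenix; only the prefix and the part ordering come from the text:

```java
import java.util.StringJoiner;

public class PhoenixUrl {
    // Compose jdbc:phoenix[:<zookeeper quorum>[:<port>][:<root node>]
    //                      [:<principal>][:<keytab file>]]
    static String connectionUrl(String... optionalParts) {
        StringJoiner joiner = new StringJoiner(":");
        joiner.add("jdbc:phoenix");
        for (String part : optionalParts) {
            joiner.add(part);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        // Matches the first example above: a two-server quorum plus a port.
        System.out.println(connectionUrl("server1,server2", "3333"));
        // A distinct connection-string token yields a separate HConnection.
        System.out.println(connectionUrl("my_server", "shortRunning"));
    }
}
```

Omitted parts simply do not appear in the string; as the text notes, their values are then taken from hbase-site.xml.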
 
 
-####Not Supported
-Here's a list of what is currently **not** supported:
-
-* **Relational operators**. Intersect, Minus.
-* **Miscellaneous built-in functions**. These are easy to add - read this 
[blog](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html)
 for step by step instructions.
-
-##<a id="transactions"></a>Transactions##
+##<a id="transactions"></a>Transactions
 To enable full ACID transactions, a beta feature available in the 4.7.0 
release, set the <code>phoenix.transactions.enabled</code> property to true. In 
this case, you'll also need to run the transaction manager that's included in 
the distribution. Once enabled, a table may optionally be declared as 
transactional (see [here](transactions.html) for directions). Commits over 
transactional tables will have an all-or-none behavior - either all data will 
be committed (including any updates to secondary indexes) or none of it will 
(and an exception will be thrown). Both cross table and cross row transactions 
are supported. In addition, transactional tables will see their own uncommitted 
data when querying. An optimistic concurrency model is used to detect row level 
conflicts with first commit wins semantics. The later commit would produce an 
exception indicating that a conflict was detected. A transaction is started 
implicitly when a transactional table is referenced in a statement, at which 
point you will not see updates from other connections until either a commit 
or rollback occurs.
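The commit/rollback flow described above can be sketched in JDBC. This is illustrative only: the connection string, the table name MY_TABLE, and its schema are hypothetical; the `phoenix.transactions.enabled` property and the all-or-none semantics are as stated in the text, and a running transaction manager plus a table declared transactional are assumed.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class TransactionSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        // Enable transaction support (beta in 4.7.0); the transaction manager
        // shipped with the distribution must also be running.
        props.setProperty("phoenix.transactions.enabled", "true");

        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:my_server", props)) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // Referencing a transactional table implicitly starts a
                // transaction; this connection sees its own uncommitted rows.
                stmt.executeUpdate("UPSERT INTO MY_TABLE VALUES (1, 'first')");
                stmt.executeUpdate("UPSERT INTO MY_TABLE VALUES (2, 'second')");
                // All-or-none: both rows (and any secondary index updates)
                // become visible together, or not at all.
                conn.commit();
            } catch (SQLException e) {
                conn.rollback(); // e.g. a first-commit-wins conflict was detected
                throw e;
            }
        }
    }
}
```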
 
 Non transactional tables have no guarantees above and beyond the HBase 
guarantee of row level atomicity (see 
[here](https://hbase.apache.org/acid-semantics.html)). In addition, non 
transactional tables will not see their updates until after a commit has 
occurred. The DML commands of Apache Phoenix, UPSERT VALUES, UPSERT SELECT and 
DELETE, batch pending changes to HBase tables on the client side. The changes 
are sent to the server when the transaction is committed and discarded when the 
transaction is rolled back. If auto commit is turned on for a connection, then 
Phoenix will, whenever possible, execute the entire DML command through a 
coprocessor on the server-side, so performance will improve.
@@ -109,8 +112,7 @@ queries against prior row values, since
 Timestamps may not be controlled for transactional tables. Instead, the 
transaction manager assigns timestamps which become the HBase cell timestamps 
after a commit. Timestamps still correspond to wall clock time, however they 
are multiplied by 1,000,000 to ensure enough granularity for uniqueness across 
the cluster.
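The granularity arithmetic above can be shown directly. This is a sketch: the 1,000,000 multiplier comes from the text, but the assumption that wall-clock time is measured in epoch milliseconds, and the helper name, are ours.

```java
public class TxTimestamps {
    // Scale wall-clock milliseconds by 1,000,000, as described for the
    // timestamps the transaction manager assigns to transactional tables.
    static long toTransactionTimestamp(long wallClockMillis) {
        return wallClockMillis * 1_000_000L;
    }

    public static void main(String[] args) {
        long wall = 1_457_670_785_000L; // 2016-03-11 04:33:05 UTC as epoch millis
        // Long.MAX_VALUE is ~9.22e18, so scaled timestamps still fit in a long.
        System.out.println(toTransactionTimestamp(wall));
    }
}
```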
 
 ##<a id="schema"></a>Schema
-
-Apache Phoenix supports table creation and versioned incremental alterations 
through DDL commands. The table metadata is stored in an HBase table.
+Apache Phoenix supports table creation and versioned incremental alterations 
through DDL commands. The table metadata is stored in an HBase table and 
versioned, such that snapshot queries over prior versions will automatically 
use the correct schema. 
 
 A Phoenix table is created through the [CREATE 
TABLE](language/index.html#create) command and can either be:
 

