Modified: eagle/site/docs/mapr-integration.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/mapr-integration.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/mapr-integration.html (original)
+++ eagle/site/docs/mapr-integration.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">MapR Integration</h1>
-        <p><em>Since Apache Eagle 0.4.0-incubating. Apache Eagle (incubating) 
will be called Eagle in the following.</em></p>
+        <p><em>Since Apache Eagle 0.4.0-incubating. Apache Eagle will be 
called Eagle in the following.</em></p>
 
 <h3 id="prerequisites">Prerequisites</h3>
 
@@ -235,58 +235,51 @@
 <p>First we need to enable data auditing at all three levels: cluster level, 
volume level, and directory, file, or table level.</p>
 
 <h5 id="cluster-level">Cluster level:</h5>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>       $ maprcli 
audit data -cluster &lt;cluster name&gt; -enabled true 
+<pre><code>       $ maprcli audit data -cluster &lt;cluster name&gt; -enabled 
true 
                            [ -maxsize &lt;GB, default value is 32. When the size of 
audit logs exceeds this number, an alarm will be sent to the dashboard in the 
MapR Control Service &gt; ]
                            [ -retention &lt;number of Days&gt; ]
 </code></pre>
-</div>
 <p>Example:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli 
audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30
+<pre><code>        $ maprcli audit data -cluster mapr.cluster.com -enabled 
true -maxsize 30 -retention 30
 </code></pre>
-</div>
 
 <h5 id="volume-level">Volume level:</h5>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>       $ maprcli 
volume audit -cluster &lt;cluster name&gt; -enabled true 
+<pre><code>       $ maprcli volume audit -cluster &lt;cluster name&gt; 
-enabled true 
                             -name &lt;volume name&gt;
                             [ -coalesce &lt;interval in minutes, the interval 
of time during which READ, WRITE, or GETATTR operations on one file from one 
client IP address are logged only once, if auditing is enabled&gt; ]
 </code></pre>
-</div>
 
 <p>Example:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli 
volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true
+<pre><code>        $ maprcli volume audit -cluster mapr.cluster.com -name 
mapr.tmp -enabled true
 </code></pre>
-</div>
 
 <p>To verify that auditing is enabled for a particular volume, use this 
command:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>        $ maprcli 
volume info -name &lt;volume name&gt; -json | grep -i 'audited\|coalesce'
+<pre><code>        $ maprcli volume info -name &lt;volume name&gt; -json | 
grep -i 'audited\|coalesce'
 </code></pre>
-</div>
 <p>and you should see something like this:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>                   
     "audited":1,
+<pre><code>                        "audited":1,
                         "coalesceInterval":60
 </code></pre>
-</div>
 <p>If “audited” is ‘1’ then auditing is enabled for this volume.</p>
 
 <h5 id="directory-file-or-mapr-db-table-level">Directory, file, or MapR-DB 
table level:</h5>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>        $ hadoop 
mfs -setaudit on &lt;directory|file|table&gt;
+<pre><code>        $ hadoop mfs -setaudit on &lt;directory|file|table&gt;
 </code></pre>
-</div>
 
-<p>To check whether Auditing is Enabled for a Directory, File, or MapR-DB 
Table, use <code class="highlighter-rouge">$ hadoop mfs -ls</code>
+<p>To check whether auditing is enabled for a directory, file, or MapR-DB 
table, use <code>$ hadoop mfs -ls</code>.
 Example:
-Before enable the audit log on file <code 
class="highlighter-rouge">/tmp/dir</code>, try <code 
class="highlighter-rouge">$ hadoop mfs -ls /tmp/dir</code>, you should see 
something like this:
+Before enabling the audit log on <code>/tmp/dir</code>, try <code>$ hadoop 
mfs -ls /tmp/dir</code>; you should see something like this:
 ~~~
 drwxr-xr-x Z U U   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
                p 2050.32.131328  mapr2.da.dg:5660 mapr1.da.dg:5660
 ~~~
-The second <code class="highlighter-rouge">U</code> means auditing on this 
file is not enabled. 
+The second <code>U</code> means auditing on this file is not enabled. 
 Enable auditing with this command: 
 ~~~
 $ hadoop mfs -setaudit on /tmp/dir
@@ -300,7 +293,7 @@ you should see something like this:
 drwxr-xr-x Z U A   - root root          0 2016-03-02 15:02  268435456 /tmp/dir
                p 2050.32.131328  mapr2.da.dg:5660 mapr1.da.dg:5660
 ~~~
-We can see the previous <code class="highlighter-rouge">U</code> has been 
changed to <code class="highlighter-rouge">A</code> which indicates auditing on 
this file is enabled.</p>
+We can see the previous <code>U</code> has been changed to <code>A</code>, 
which indicates that auditing on this file is now enabled.</p>
 
 <h6 id="important">Important:</h6>
 <p>When auditing has been enabled on a directory, the directories/files already 
located in it won’t inherit auditing, but any file/dir created in the directory 
after auditing was enabled will.</p>
@@ -308,11 +301,11 @@ We can see the previous <code class="hig
 <h4 id="step2-stream-log-data-into-kafka-by-using-logstash">Step2: Stream log 
data into Kafka by using Logstash</h4>
 <p>As MapR does not have a name node (it uses the CLDB service instead), we 
have to use Logstash to stream the audit log data into Kafka.
 - First find out the nodes that run the CLDB service
-- Then find out the location of audit log files, eg: <code 
class="highlighter-rouge">/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>,
 file names should be in this format: <code 
class="highlighter-rouge">FSAudit.log-2016-05-04-001.json</code> 
-- Created a logstash conf file and run it, following this doc<a 
href="https://github.com/apache/incubator-eagle/blob/dev/eagle-assembly/src/main/docs/logstash-kafka-conf.md";>Logstash-kafka</a></p>
+- Then find out the location of the audit log files, e.g. 
<code>/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>; file 
names should be in this format: <code>FSAudit.log-2016-05-04-001.json</code> 
+- Create a Logstash conf file and run it, following this doc: <a 
href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md";>Logstash-kafka</a>. A minimal sketch of such a conf file is shown below.</p>
 
 <h4 id="step3-set-up-maprfsauditlog-applicaiton-in-eagle-service">Step3: Set 
up maprFSAuditLog applicaiton in Eagle Service</h4>
-<p>After Eagle Service gets started, create mapFSAuditLog application using:  
<code class="highlighter-rouge">$ ./maprFSAuditLog-init.sh</code>. By default 
it will create maprFSAuditLog in site “sandbox”, you may need to change it 
to your own site.
+<p>After the Eagle Service gets started, create the maprFSAuditLog application 
using <code>$ ./maprFSAuditLog-init.sh</code>. By default it will create 
maprFSAuditLog in the site “sandbox”; you may need to change it to your own 
site.
 After these steps you are good to go.</p>
 
 <p>Have fun!!! :)</p>

Modified: eagle/site/docs/metadata-api.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/metadata-api.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/metadata-api.html (original)
+++ eagle/site/docs/metadata-api.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Policy API</h1>
-        <p>Apache Eagle (incubating) Provide RESTful APIs for 
create/update/query/delete policy for alert</p>
+        <p>Apache Eagle provides RESTful APIs to create, update, query, and 
delete alert policies</p>
 
 <ul>
   <li>Policy Definition API</li>

Modified: eagle/site/docs/quick-start-0.3.0.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/quick-start-0.3.0.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/quick-start-0.3.0.html (original)
+++ eagle/site/docs/quick-start-0.3.0.html Tue Jan  3 01:19:05 2017
@@ -218,7 +218,7 @@
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Quick Start</h1>
         <p>Guide to install Apache Eagle 0.3.0-incubating on the Hortonworks 
sandbox.<br />
-<em>Apache Eagle (incubating) will be called Eagle in the following.</em></p>
+<em>Apache Eagle will be called Eagle in the following.</em></p>
 
 <ul>
   <li>Prerequisite</li>
@@ -235,26 +235,25 @@
 
 <h3 id="download--patch--build"><strong>Download + Patch + Build</strong></h3>
 <ul>
-  <li>Download Eagle 0.3.0 source released From Apache <a 
href="http://www-us.apache.org/dist/incubator/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz";>[Tar]</a>
 , <a 
href="http://www-us.apache.org/dist/incubator/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz.md5";>[MD5]</a></li>
+  <li>Download Eagle 0.3.0 source released From Apache <a 
href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz";>[Tar]</a>
 , <a 
href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.3.0-incubating/apache-eagle-0.3.0-incubating-src.tar.gz.md5";>[MD5]</a></li>
   <li>
     <p>Build manually with <a href="https://maven.apache.org/";>Apache 
Maven</a>:</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf 
apache-eagle-0.3.0-incubating-src.tar.gz
+    <pre><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz
 $ cd incubator-eagle-release-0.3.0-rc3  
-$ curl -O 
https://patch-diff.githubusercontent.com/raw/apache/incubator-eagle/pull/180.patch
+$ curl -O 
https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/180.patch
 $ git apply 180.patch
 $ mvn clean package -DskipTests
 </code></pre>
-    </div>
 
-    <p>After building successfully, you will get tarball under <code 
class="highlighter-rouge">eagle-assembly/target/</code> named as <code 
class="highlighter-rouge">eagle-0.3.0-incubating-bin.tar.gz</code>
+    <p>After building successfully, you will get a tarball under 
<code>eagle-assembly/target/</code> named 
<code>eagle-0.3.0-incubating-bin.tar.gz</code>
 <br /></p>
   </li>
 </ul>
 
 <h3 id="install-eagle"><strong>Install Eagle</strong></h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222  
eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz [email protected]:/root/
+<pre><code> $ scp -P 2222  
eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz [email protected]:/root/
  $ ssh [email protected] -p 2222 (password is hadoop)
  $ tar -zxvf eagle-0.3.0-incubating-bin.tar.gz
  $ mv eagle-0.3.0-incubating eagle
@@ -262,7 +261,6 @@ $ mvn clean package -DskipTests
  $ cd /usr/hdp/current/eagle
  $ examples/eagle-sandbox-starter.sh
 </code></pre>
-</div>
 
 <p><br /></p>
 

Modified: eagle/site/docs/quick-start.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/quick-start.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/quick-start.html (original)
+++ eagle/site/docs/quick-start.html Tue Jan  3 01:19:05 2017
@@ -236,26 +236,25 @@
 
 <h3 id="download--patch--build"><strong>Download + Patch + Build</strong></h3>
 <ul>
-  <li>Download latest Eagle source released From Apache <a 
href="http://www-us.apache.org/dist/incubator/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz";>[Tar]</a>,
 <a 
href="http://www-us.apache.org/dist/incubator/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.md5";>[MD5]</a>.</li>
+  <li>Download latest Eagle source released From Apache <a 
href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz";>[Tar]</a>,
 <a 
href="https://dist.apache.org/repos/dist/release/eagle/apache-eagle-0.4.0-incubating/apache-eagle-0.4.0-incubating-src.tar.gz.md5";>[MD5]</a>.</li>
   <li>
     <p>Build manually with <a href="https://maven.apache.org/";>Apache 
Maven</a>:</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf 
apache-eagle-0.4.0-incubating-src.tar.gz
+    <pre><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz
 $ cd apache-eagle-0.4.0-incubating-src 
-$ curl -O 
https://patch-diff.githubusercontent.com/raw/apache/incubator-eagle/pull/268.patch
+$ curl -O 
https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/268.patch
 $ git apply 268.patch
 $ mvn clean package -DskipTests
 </code></pre>
-    </div>
 
-    <p>After building successfully, you will get a tarball under <code 
class="highlighter-rouge">eagle-assembly/target/</code> named <code 
class="highlighter-rouge">apache-eagle-0.4.0-incubating-bin.tar.gz</code>
+    <p>After building successfully, you will get a tarball under 
<code>eagle-assembly/target/</code> named 
<code>apache-eagle-0.4.0-incubating-bin.tar.gz</code>
 <br /></p>
   </li>
 </ul>
 
 <h3 id="install-eagle"><strong>Install Eagle</strong></h3>
 
-<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222 
eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz 
[email protected]:/root/
+<pre><code> $ scp -P 2222 
eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz 
[email protected]:/root/
  $ ssh [email protected] -p 2222 (password is hadoop)
  $ tar -zxvf apache-eagle-0.4.0-incubating-bin.tar.gz
  $ mv apache-eagle-0.4.0-incubating eagle
@@ -263,22 +262,20 @@ $ mvn clean package -DskipTests
  $ cd /usr/hdp/current/eagle
  $ examples/eagle-sandbox-starter.sh
 </code></pre>
-</div>
 
 <p><br /></p>
 
 <h3 
id="sample-application-hive-query-activity-monitoring-in-sandbox"><strong>Sample
 Application: Hive query activity monitoring in sandbox</strong></h3>
-<p>After executing <code 
class="highlighter-rouge">examples/eagle-sandbox-starter.sh</code>, you have a 
sample application (topology) running on the Apache Storm (check with <a 
href="http://sandbox.hortonworks.com:8744/index.html";>storm ui</a>), and a 
sample policy of Hive activity monitoring defined.</p>
+<p>After executing <code>examples/eagle-sandbox-starter.sh</code>, you have a 
sample application (topology) running on Apache Storm (check with <a 
href="http://sandbox.hortonworks.com:8744/index.html";>storm ui</a>), and a 
sample policy of Hive activity monitoring defined.</p>
 
 <p>Next you can trigger an alert by running a Hive query.</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>$ su hive
+<pre><code>$ su hive
 $ hive
 $ set hive.execution.engine=mr;
 $ use xademo;
 $ select a.phone_number from customer_details a, call_detail_records b where 
a.phone_number=b.phone_number;
 </code></pre>
-</div>
 <p><br /></p>
 
 <hr />

Modified: eagle/site/docs/serviceconfiguration.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/serviceconfiguration.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/serviceconfiguration.html (original)
+++ eagle/site/docs/serviceconfiguration.html Tue Jan  3 01:19:05 2017
@@ -3,7 +3,7 @@
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
 
-       <title>Eagle - Apache Eagle (incubating) Service Configuration</title>
+       <title>Eagle - Apache Eagle Service Configuration</title>
        <meta name="description" content="Eagle - Analyze Big Data Platforms 
for Security and Performance">
 
        <meta name="keywords" content="Eagle, Hadoop, Security, Real Time">
@@ -216,8 +216,8 @@
         </ul>
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
-        <h1 class="page-header" style="margin-top: 0px">Apache Eagle 
(incubating) Service Configuration</h1>
-        <p>Apache Eagle (incubating, called Eagle in the following) Service 
provides some config files for specifying metadata storage, security access to 
Eagle Service. This page will give detailed
+        <h1 class="page-header" style="margin-top: 0px">Apache Eagle Service 
Configuration</h1>
+        <p>Apache Eagle (called Eagle in the following) Service provides 
config files for specifying metadata storage and security access to the Eagle 
Service. This page gives a detailed
 description of the Eagle Service configuration.</p>
 
 <p>Eagle currently supports customizing the following configurations:</p>
@@ -232,7 +232,7 @@ description of Eagle Service configurati
   <li>for hbase</li>
 </ul>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
+<pre><code>eagle {
        service{
                storage-type="hbase"
                hbase-zookeeper-quorum="sandbox.hortonworks.com"
@@ -243,13 +243,12 @@ description of Eagle Service configurati
        }
       }
 </code></pre>
-</div>
 
 <ul>
   <li>for mysql</li>
 </ul>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
+<pre><code>eagle {
        service {
                storage-type="jdbc"
                storage-adapter="mysql"
@@ -263,13 +262,12 @@ description of Eagle Service configurati
        }
 }
 </code></pre>
-</div>
 
 <ul>
   <li>for derby</li>
 </ul>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>eagle {
+<pre><code>eagle {
        service {
                storage-type="jdbc"
                storage-adapter="derby"
@@ -283,7 +281,6 @@ description of Eagle Service configurati
        }
 }
 </code></pre>
-</div>
 <p><br /></p>
 
       </div><!--end of loadcontent-->  

Modified: eagle/site/docs/terminology.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/terminology.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/terminology.html (original)
+++ eagle/site/docs/terminology.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Terminology</h1>
-        <p>Here are some terms we are using in Apache Eagle (incubating, 
called Eagle in the following), please check them for your reference.
+        <p>Here are some terms we are using in Apache Eagle (called Eagle in 
the following); please check them for your reference.
 They are basic Eagle concepts that will also help you understand Eagle 
better.</p>
 
 <ul>

Modified: eagle/site/docs/tutorial/classification-0.3.0.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/classification-0.3.0.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/classification-0.3.0.html (original)
+++ eagle/site/docs/tutorial/classification-0.3.0.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Data Classification 
Tutorial</h1>
-        <p>Apache Eagle (incubating) data classification feature provides the 
ability to classify data with different levels of sensitivity.
+        <p>Apache Eagle data classification feature provides the ability to 
classify data with different levels of sensitivity.
 Currently this feature is available ONLY for applications monitoring HDFS, 
Apache Hive and Apache HBase. For example, HdfsAuditLog, HiveQueryLog and 
HBaseSecurityLog.</p>
 
 <p>The main contents of this page are</p>

Modified: eagle/site/docs/tutorial/classification.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/classification.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/classification.html (original)
+++ eagle/site/docs/tutorial/classification.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Data Classification 
Tutorial</h1>
-        <p>Apache Eagle (incubating) data classification feature provides the 
ability to classify data with different levels of sensitivity.
+        <p>Apache Eagle data classification feature provides the ability to 
classify data with different levels of sensitivity.
 Currently this feature is available ONLY for applications monitoring HDFS, 
Hive<sup id="fnref:HIVE"><a href="#fn:HIVE" class="footnote">1</a></sup> and 
HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">2</a></sup>. 
For example, HdfsAuditLog, HiveQueryLog and HBaseSecurityLog.</p>
 
 <p>The main contents of this page are</p>
@@ -243,33 +243,30 @@ Currently this feature is available ONLY
 
         <p>You may configure the default path for Apache Hadoop clients to 
connect to the remote HDFS namenode.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020
+        <pre><code>  
classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020
 </code></pre>
-        </div>
       </li>
       <li>
         <p>HA case</p>
 
         <p>Basically, you point your fs.defaultFS at your nameservice and let 
the client know how it's configured (the backing namenodes) and how to fail over 
between them under HA mode.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.fs.defaultFS=hdfs://nameservice1
+        <pre><code>  classification.fs.defaultFS=hdfs://nameservice1
   classification.dfs.nameservices=nameservice1
   classification.dfs.ha.namenodes.nameservice1=namenode1,namenode2
   
classification.dfs.namenode.rpc-address.nameservice1.namenode1=hadoopnamenode01:8020
   
classification.dfs.namenode.rpc-address.nameservice1.namenode2=hadoopnamenode02:8020
   
classification.dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 </code></pre>
-        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
         <p>For a Kerberos-secured cluster, you need to get a keytab file and the 
principal from your admin, and configure “eagle.keytab.file” and 
“eagle.kerberos.principal” to authenticate its access.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
+        <pre><code>  
classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
   [email protected]
 </code></pre>
-        </div>
 
         <p>If there is an exception about “invalid server principal name”, 
you may need to check the DNS resolver, or the data transfer settings, such as 
“dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, 
“dfs.trustedchannel.resolver.class”, 
“dfs.datatransfer.client.encrypt”.</p>
       </li>
@@ -280,13 +277,12 @@ Currently this feature is available ONLY
       <li>
         <p>Basic</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.accessType=metastoredb_jdbc
+        <pre><code>  classification.accessType=metastoredb_jdbc
   classification.password=hive
   classification.user=hive
   classification.jdbcDriverClassName=com.mysql.jdbc.Driver
   
classification.jdbcUrl=jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true
 </code></pre>
-        </div>
       </li>
     </ul>
   </li>
@@ -299,17 +295,16 @@ Currently this feature is available ONLY
 
         <p>You need to set the “hbase.zookeeper.quorum” property (e.g. 
“localhost”) and the “hbase.zookeeper.property.clientPort” property.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.hbase.zookeeper.property.clientPort=2181
+        <pre><code>  classification.hbase.zookeeper.property.clientPort=2181
   classification.hbase.zookeeper.quorum=localhost
 </code></pre>
-        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
         <p>According to your environment, you can add or remove some of the 
following properties. Here is the reference.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.hbase.zookeeper.property.clientPort=2181
+        <pre><code>  classification.hbase.zookeeper.property.clientPort=2181
   classification.hbase.zookeeper.quorum=localhost
   classification.hbase.security.authentication=kerberos
   classification.hbase.master.kerberos.principal=hadoop/[email protected]
@@ -317,7 +312,6 @@ Currently this feature is available ONLY
   classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
   [email protected]
 </code></pre>
-        </div>
       </li>
     </ul>
   </li>

Modified: eagle/site/docs/tutorial/ldap.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/ldap.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/ldap.html (original)
+++ eagle/site/docs/tutorial/ldap.html Tue Jan  3 01:19:05 2017
@@ -3,7 +3,7 @@
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
 
-       <title>Eagle - Apache Eagle (incubating) LDAP Tutorial</title>
+       <title>Eagle - Apache Eagle LDAP Tutorial</title>
        <meta name="description" content="Eagle - Analyze Big Data Platforms 
for Security and Performance">
 
        <meta name="keywords" content="Eagle, Hadoop, Security, Real Time">
@@ -216,12 +216,12 @@
         </ul>
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
-        <h1 class="page-header" style="margin-top: 0px">Apache Eagle 
(incubating) LDAP Tutorial</h1>
-        <p>To enable Apache Eagle (incubating, called Eagle in the following) 
LDAP authentication on the web, two steps are needed.</p>
+        <h1 class="page-header" style="margin-top: 0px">Apache Eagle LDAP 
Tutorial</h1>
+        <p>To enable Apache Eagle (called Eagle in the following) LDAP 
authentication on the web, two steps are needed.</p>
 
 <p>Step 1: edit configuration under conf/ldap.properties.</p>
 
-<div class="highlighter-rouge"><pre 
class="highlight"><code>ldap.server=ldap://localhost:10389
+<pre><code>ldap.server=ldap://localhost:10389
 ldap.username=uid=admin,ou=system
 ldap.password=secret
 ldap.user.searchBase=ou=Users,o=mojo
@@ -230,13 +230,12 @@ ldap.user.groupSearchBase=ou=groups,o=mo
 acl.adminRole=
 acl.defaultRole=ROLE_USER
 </code></pre>
-</div>
 
 <p>acl.adminRole and acl.defaultRole are two customized properties for Eagle. 
Eagle manages admin users with groups. If you set acl.adminRole to 
ROLE_{EAGLE-ADMIN-GROUP-NAME}, members of this group have the admin privilege; 
acl.defaultRole is ROLE_USER. An example is shown below.</p>
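
<p>For instance, assuming a hypothetical LDAP admin group named EAGLE-ADMIN, 
the two properties in conf/ldap.properties could look like this:</p>

<pre><code># "EAGLE-ADMIN" is an assumed group name; substitute your own admin group
acl.adminRole=ROLE_EAGLE-ADMIN
acl.defaultRole=ROLE_USER
</code></pre>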
 
 <p>Step 2: edit conf/eagle-service.conf, and add 
springActiveProfile=”default”</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>eagle{
+<pre><code>eagle{
     service{
         storage-type="hbase"
         hbase-zookeeper-quorum="localhost"
@@ -246,7 +245,6 @@ acl.defaultRole=ROLE_USER
     }
 }
 </code></pre>
-</div>
 
 
       </div><!--end of loadcontent-->  

Modified: eagle/site/docs/tutorial/notificationplugin.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/notificationplugin.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/notificationplugin.html (original)
+++ eagle/site/docs/tutorial/notificationplugin.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Notification 
Plugin</h1>
-        <p><em>Since Apache Eagle 0.4.0-incubating. Apache Eagle (incubating) 
will be called Eagle in the following.</em></p>
+        <p><em>Since Apache Eagle 0.4.0-incubating. Apache Eagle will be 
called Eagle in the following.</em></p>
 
 <h3 id="eagle-notification-plugins">Eagle Notification Plugins</h3>
 
@@ -249,7 +249,7 @@
 
 <p>To integrate a customized notification plugin, we must implement an 
interface</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code>public interface 
NotificationPlugin {
+<pre><code>public interface NotificationPlugin {
 /**
  * for initialization
  * @throws Exception
@@ -279,28 +279,26 @@ void onAlert(AlertAPIEntity alertEntity)
 List&lt;NotificationStatus&gt; getStatusList();
 } Examples: AlertKafkaPlugin, AlertEmailPlugin, and AlertEagleStorePlugin.
 </code></pre>
-</div>
 
 <p>The second and crucial step is to register the configuration of the 
customized plugin. In other words, we need to persist the configuration template 
into the database in order to expose the configuration to users in the front 
end.</p>
 
 <p>Examples:</p>
 
-<div class="highlighter-rouge"><pre class="highlight"><code><span 
class="p">{</span><span class="w">
-   </span><span class="nt">"prefix"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"alertNotifications"</span><span 
class="p">,</span><span class="w">
-   </span><span class="nt">"tags"</span><span class="p">:</span><span 
class="w"> </span><span class="p">{</span><span class="w">
-     </span><span class="nt">"notificationType"</span><span 
class="p">:</span><span class="w"> </span><span class="s2">"kafka"</span><span 
class="w">
-   </span><span class="p">},</span><span class="w">
-   </span><span class="nt">"className"</span><span class="p">:</span><span 
class="w"> </span><span 
class="s2">"org.apache.eagle.notification.plugin.AlertKafkaPlugin"</span><span 
class="p">,</span><span class="w">
-   </span><span class="nt">"description"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"send alert to kafka bus"</span><span 
class="p">,</span><span class="w">
-   </span><span class="nt">"enabled"</span><span class="p">:</span><span 
class="kc">true</span><span class="p">,</span><span class="w">
-   </span><span class="nt">"fields"</span><span class="p">:</span><span 
class="w"> </span><span 
class="s2">"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"</span><span
 class="w">
-</span><span class="p">}</span><span class="w">
-</span></code></pre>
-</div>
+<pre><code>{
+   "prefix": "alertNotifications",
+   "tags": {
+     "notificationType": "kafka"
+   },
+   "className": "org.apache.eagle.notification.plugin.AlertKafkaPlugin",
+   "description": "send alert to kafka bus",
+   "enabled":true,
+   "fields": 
"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"
+}
+</code></pre>
 
-<p><strong>Note</strong>: <code class="highlighter-rouge">fields</code> is the 
configuration for notification type <code 
class="highlighter-rouge">kafka</code></p>
+<p><strong>Note</strong>: <code>fields</code> is the configuration for 
notification type <code>kafka</code></p>
 
-<p>How can we do that? <a 
href="https://github.com/apache/incubator-eagle/blob/master/eagle-assembly/src/main/bin/eagle-topology-init.sh";>Here</a>
 are Eagle other notification plugin configurations. Just append yours to it, 
and run this script when Eagle service is up.</p>
+<p>How can we do that? <a 
href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/bin/eagle-topology-init.sh";>Here</a>
 are Eagle's other notification plugin configurations. Just append yours to the 
script and run it when the Eagle service is up; a sketch of what the script 
does is shown below.</p>
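
<p>As a rough sketch only: the init script registers each plugin configuration 
by POSTing it as an entity to the Eagle service REST API. The host, port, 
credentials, and service name in the example below are assumptions for 
illustration; the linked script remains the authoritative reference.</p>

<pre><code># hypothetical example of registering the kafka notification plugin entity;
# endpoint, credentials and serviceName are assumed; check eagle-topology-init.sh
curl -u admin:secret -X POST -H 'Content-Type: application/json' \
  "http://localhost:9099/eagle-service/rest/entities?serviceName=AlertNotificationService" \
  -d '[{
        "prefix": "alertNotifications",
        "tags": {"notificationType": "kafka"},
        "className": "org.apache.eagle.notification.plugin.AlertKafkaPlugin",
        "description": "send alert to kafka bus",
        "enabled": true,
        "fields": "[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"
      }]'
</code></pre>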
 
 <hr />
 

Modified: eagle/site/docs/tutorial/policy-capabilities.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/policy-capabilities.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/policy-capabilities.html (original)
+++ eagle/site/docs/tutorial/policy-capabilities.html Tue Jan  3 01:19:05 2017
@@ -219,7 +219,7 @@
         <h1 class="page-header" style="margin-top: 0px">Policy Engine 
Capabilities</h1>
         <h3 id="cep-as-first-class-policy-engine">CEP as first class policy 
engine</h3>
 
-<p>Apache Eagle (incubating, called Eagle in the following) platform supports 
CEP engine as first class policy engine, i.e. Eagle platform runs CEP engine on 
top of Apache Storm and make rules be hot deployed.<br />
+<p>The Apache Eagle (called Eagle in the following) platform supports a CEP 
engine as its first-class policy engine, i.e. Eagle runs the CEP engine on top 
of Apache Storm and allows rules to be hot-deployed.<br />
Specifically, Eagle uses the WSO2 Siddhi CEP library; its source code is <a 
href="https://github.com/wso2/siddhi";>here</a>.</p>
 
 <h3 id="policy-capabilities">Policy capabilities</h3>

Modified: eagle/site/docs/tutorial/policy.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/policy.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/policy.html (original)
+++ eagle/site/docs/tutorial/policy.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Policy Tutorial</h1>
-        <p>Apache Eagle (incubating, called Eagle in the following) currently 
supports to customize policies for data sources for each site:</p>
+        <p>Apache Eagle (called Eagle in the following) currently supports 
customizing policies for the following data sources for each site:</p>
 
 <ul>
   <li>HDFS Audit Log</li>
@@ -244,13 +244,12 @@
   <li>
     <p><strong>Step 2</strong>: Eagle supports a variety of properties for 
match criteria where users can set different values. Eagle also supports window 
functions to extend policies with time functions.</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>command = 
delete 
+    <pre><code>command = delete 
 (Eagle currently supports the following commands open, delete, copy, append, 
copy from local, get, move, mkdir, create, list, change permissions)
        
 source = /tmp/private 
 (Eagle supports wildcarding for property values for example /tmp/*)
 </code></pre>
-    </div>
 
     <p><img src="/images/docs/hdfs-policy2.png" alt="HDFS Policies" /></p>
   </li>
@@ -277,13 +276,12 @@ source = /tmp/private
   <li>
    <p><strong>Step 2</strong>: Eagle supports a variety of properties for 
match criteria where users can set different values. Eagle also supports window 
functions to extend policies with time functions.</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>command = 
Select 
+    <pre><code>command = Select 
 (Eagle currently supports the following commands DDL statements Create, Drop, 
Alter, Truncate, Show)
        
 sensitivity type = PHONE_NUMBER
 (Eagle supports classifying data in Hive with different sensitivity types. 
Users can use these sensitivity types to create policies)
 </code></pre>
-    </div>
 
     <p><img src="/images/docs/hive-policy2.png" alt="Hive Policies" /></p>
   </li>

Modified: eagle/site/docs/tutorial/site-0.3.0.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/site-0.3.0.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/site-0.3.0.html (original)
+++ eagle/site/docs/tutorial/site-0.3.0.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Site Management</h1>
-        <p>Apache Eagle (incubating, called Eagle in the following) identifies 
different Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" 
class="footnote">1</a></sup> environments as different sites, such as sandbox, 
datacenter1, datacenter2. In each site, a user can add different data sources 
as the monitoring targets. For each data source, a connection configuration is 
required.</p>
+        <p>Apache Eagle (called Eagle in the following) identifies different 
Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" class="footnote">1</a></sup> 
environments as different sites, such as sandbox, datacenter1, datacenter2. In 
each site, a user can add different data sources as the monitoring targets. For 
each data source, a connection configuration is required.</p>
 
 <h4 id="step-1-add-site">Step 1: Add Site</h4>
 
@@ -241,35 +241,32 @@ Here we give configuration examples for
 
         <p>You may configure the default path for Hadoop clients to connect 
to the remote HDFS namenode.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span 
class="nt">"fs.defaultFS"</span><span class="p">:</span><span 
class="s2">"hdfs://sandbox.hortonworks.com:8020"</span><span 
class="p">}</span><span class="w">
-</span></code></pre>
-        </div>
+        <pre><code>  {"fs.defaultFS":"hdfs://sandbox.hortonworks.com:8020"}
+</code></pre>
       </li>
       <li>
         <p>HA case</p>
 
         <p>Basically, you point your fs.defaultFS at your nameservice and let 
the client know how it's configured (the backing namenodes) and how to fail over 
between them under HA mode.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span 
class="nt">"fs.defaultFS"</span><span class="p">:</span><span 
class="s2">"hdfs://nameservice1"</span><span class="p">,</span><span class="w">
-   </span><span class="nt">"dfs.nameservices"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"nameservice1"</span><span class="p">,</span><span class="w">
-   </span><span class="nt">"dfs.ha.namenodes.nameservice1"</span><span 
class="p">:</span><span class="s2">"namenode1,namenode2"</span><span 
class="p">,</span><span class="w">
-   </span><span 
class="nt">"dfs.namenode.rpc-address.nameservice1.namenode1"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"hadoopnamenode01:8020"</span><span class="p">,</span><span 
class="w">
-   </span><span 
class="nt">"dfs.namenode.rpc-address.nameservice1.namenode2"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"hadoopnamenode02:8020"</span><span class="p">,</span><span 
class="w">
-   </span><span 
class="nt">"dfs.client.failover.proxy.provider.nameservice1"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"</span><span
 class="w">
-  </span><span class="p">}</span><span class="w">
-</span></code></pre>
-        </div>
+        <pre><code>  {"fs.defaultFS":"hdfs://nameservice1",
+   "dfs.nameservices": "nameservice1",
+   "dfs.ha.namenodes.nameservice1":"namenode1,namenode2",
+   "dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020",
+   "dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020",
+   "dfs.client.failover.proxy.provider.nameservice1": 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+  }
+</code></pre>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
         <p>For a Kerberos-secured cluster, you need to get a keytab file and the 
principal from your admin, and configure “eagle.keytab.file” and 
“eagle.kerberos.principal” to authenticate its access.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w"> </span><span 
class="nt">"eagle.keytab.file"</span><span class="p">:</span><span 
class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span 
class="p">,</span><span class="w">
-    </span><span class="nt">"eagle.kerberos.principal"</span><span 
class="p">:</span><span class="s2">"[email protected]"</span><span class="w">
-  </span><span class="p">}</span><span class="w">
-</span></code></pre>
-        </div>
+        <pre><code>  { "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
+    "eagle.kerberos.principal":"[email protected]"
+  }
+</code></pre>
 
         <p>If there is an exception about “invalid server principal name”, 
you may need to check the DNS resolver, or the data transfer settings, such as 
“dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, 
“dfs.trustedchannel.resolver.class”, 
“dfs.datatransfer.client.encrypt”.</p>
       </li>
@@ -280,15 +277,14 @@ Here we give configuration examples for
       <li>
         <p>Basic</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w">
-    </span><span class="nt">"accessType"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"metastoredb_jdbc"</span><span 
class="p">,</span><span class="w">
-    </span><span class="nt">"password"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span 
class="w">
-    </span><span class="nt">"user"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span 
class="w">
-    </span><span class="nt">"jdbcDriverClassName"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"com.mysql.jdbc.Driver"</span><span class="p">,</span><span 
class="w">
-    </span><span class="nt">"jdbcUrl"</span><span class="p">:</span><span 
class="w"> </span><span 
class="s2">"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"</span><span
 class="w">
-  </span><span class="p">}</span><span class="w">
-</span></code></pre>
-        </div>
+        <pre><code>  {
+    "accessType": "metastoredb_jdbc",
+    "password": "hive",
+    "user": "hive",
+    "jdbcDriverClassName": "com.mysql.jdbc.Driver",
+    "jdbcUrl": 
"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"
+  }
+</code></pre>
       </li>
     </ul>
   </li>
@@ -301,29 +297,27 @@ Here we give configuration examples for
 
         <p>You need to set the “hbase.zookeeper.quorum” property (e.g. 
“localhost”) and the “hbase.zookeeper.property.clientPort” property.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w">
-      </span><span 
class="nt">"hbase.zookeeper.property.clientPort"</span><span 
class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span 
class="w">
-      </span><span class="nt">"hbase.zookeeper.quorum"</span><span 
class="p">:</span><span class="s2">"localhost"</span><span class="w">
-  </span><span class="p">}</span><span class="w">
-</span></code></pre>
-        </div>
+        <pre><code>  {
+      "hbase.zookeeper.property.clientPort":"2181",
+      "hbase.zookeeper.quorum":"localhost"
+  }
+</code></pre>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
         <p>According to your environment, you can add or remove some of the 
following properties. Here is the reference.</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w">
-      </span><span 
class="nt">"hbase.zookeeper.property.clientPort"</span><span 
class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span 
class="w">
-      </span><span class="nt">"hbase.zookeeper.quorum"</span><span 
class="p">:</span><span class="s2">"localhost"</span><span 
class="p">,</span><span class="w">
-      </span><span class="nt">"hbase.security.authentication"</span><span 
class="p">:</span><span class="s2">"kerberos"</span><span 
class="p">,</span><span class="w">
-      </span><span class="nt">"hbase.master.kerberos.principal"</span><span 
class="p">:</span><span class="s2">"hadoop/[email protected]"</span><span 
class="p">,</span><span class="w">
-      </span><span class="nt">"zookeeper.znode.parent"</span><span 
class="p">:</span><span class="s2">"/hbase"</span><span class="p">,</span><span 
class="w">
-      </span><span class="nt">"eagle.keytab.file"</span><span 
class="p">:</span><span 
class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span 
class="p">,</span><span class="w">
-      </span><span class="nt">"eagle.kerberos.principal"</span><span 
class="p">:</span><span class="s2">"[email protected]"</span><span class="w">
-  </span><span class="p">}</span><span class="w">
-</span></code></pre>
-        </div>
+        <pre><code>  {
+      "hbase.zookeeper.property.clientPort":"2181",
+      "hbase.zookeeper.quorum":"localhost",
+      "hbase.security.authentication":"kerberos",
+      "hbase.master.kerberos.principal":"hadoop/[email protected]",
+      "zookeeper.znode.parent":"/hbase",
+      "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
+      "eagle.kerberos.principal":"[email protected]"
+  }
+</code></pre>
       </li>
     </ul>
   </li>

Modified: eagle/site/docs/tutorial/topologymanagement.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/topologymanagement.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/topologymanagement.html (original)
+++ eagle/site/docs/tutorial/topologymanagement.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">Topology 
Management</h1>
-        <p><em>Since Apache Eagle 0.4.0-incubating. Apache Eagle (incubating) 
will be called Eagle in the following.</em></p>
+        <p><em>Since Apache Eagle 0.4.0-incubating. Apache Eagle will be 
called Eagle in the following.</em></p>
 
 <blockquote>
   <p>Application manager aims to manage applications on the EAGLE UI. Users can 
easily start/stop topologies remotely or locally without any shell commands. 
At the same time, it is capable of syncing the latest status of topologies on 
the execution platform (e.g., Storm<sup id="fnref:STORM"><a href="#fn:STORM" 
class="footnote">1</a></sup> cluster).</p>
@@ -229,7 +229,7 @@
 <p>Application manager consists of a daemon scheduler and an execution module. 
The scheduler periodically loads user operations (start/stop) from the database, and 
the execution module executes these operations. For more details, please refer 
to <a 
href="https://cwiki.apache.org/confluence/display/EAG/Application+Management";>here</a>.</p>
 
 <h3 id="configurations">Configurations</h3>
-<p>The configuration file <code 
class="highlighter-rouge">eagle-scheduler.conf</code> defines scheduler 
parameters, execution platform settings and parts of default topology 
configuration.</p>
+<p>The configuration file <code>eagle-scheduler.conf</code> defines scheduler 
parameters, execution platform settings and parts of default topology 
configuration.</p>
 
 <ul>
   <li>
@@ -323,7 +323,7 @@
   <li>
    <p>Edit eagle-scheduler.conf, and start the Eagle service</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code> # enable 
application manager       
+    <pre><code> # enable application manager       
  appCommandLoaderEnabled = true
     
  # provide jar path
@@ -333,10 +333,9 @@
  envContextConfig.url = "http://sandbox.hortonworks.com:8744";
  envContextConfig.nimbusHost = "sandbox.hortonworks.com"
 </code></pre>
-    </div>
 
    <p>For more configurations, please refer back to <a 
href="/docs/configuration.html">Application Configuration</a>. <br />
- After the configuration is ready, start Eagle service <code 
class="highlighter-rouge">bin/eagle-service.sh start</code>.</p>
+ After the configuration is ready, start Eagle service 
<code>bin/eagle-service.sh start</code>.</p>
   </li>
   <li>
     <p>Go to admin page 
@@ -347,7 +346,7 @@
     <ul>
       <li>name: topology name</li>
       <li>type: topology type [CLASS, DYNAMIC]</li>
-      <li>execution entry: either the class which implements interface 
TopologyExecutable or eagle <a 
href="https://github.com/apache/incubator-eagle/blob/master/eagle-assembly/src/main/conf/sandbox-hadoopjmx-pipeline.conf";>DSL</a>
 based topology definition
+      <li>execution entry: either the class which implements interface 
TopologyExecutable or eagle <a 
href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/conf/sandbox-hadoopjmx-pipeline.conf";>DSL</a>
 based topology definition
 <img src="/images/appManager/topology-description.png" 
alt="topology-description" /></li>
     </ul>
   </li>
@@ -358,11 +357,11 @@
   <li>
     <p>Go to site page, and add topology configurations.</p>
 
-    <p><strong>NOTICE</strong> topology configurations defined here are 
REQUIRED an extra prefix <code class="highlighter-rouge">.app</code></p>
+    <p><strong>NOTICE</strong>: topology configurations defined here REQUIRE 
an extra <code>app.</code> prefix</p>
 
     <p>Below are some example configurations for [site=sandbox, 
application=hbaseSecurityLog].</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code> 
classification.hbase.zookeeper.property.clientPort=2181
+    <pre><code> classification.hbase.zookeeper.property.clientPort=2181
  classification.hbase.zookeeper.quorum=sandbox.hortonworks.com
     
  app.envContextConfig.env=storm
@@ -391,7 +390,6 @@
  app.eagleProps.eagleService.username=admin
  app.eagleProps.eagleService.password=secret
 </code></pre>
-    </div>
 
     <p><img src="/images/appManager/topology-configuration-1.png" 
alt="topology-configuration-1" />
 <img src="/images/appManager/topology-configuration-2.png" 
alt="topology-configuration-2" /></p>

Modified: eagle/site/docs/tutorial/userprofile.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/userprofile.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/tutorial/userprofile.html (original)
+++ eagle/site/docs/tutorial/userprofile.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">User Profile 
Tutorial</h1>
-        <p>This document will introduce how to start the online processing on 
user profiles. Assume Apache Eagle (incubating) has been installed and <a 
href="http://sandbox.hortonworks.com:9099/eagle-service";>Eagle service</a>
+        <p>This document introduces how to start online processing of user 
profiles. It assumes Apache Eagle has been installed and the <a 
href="http://sandbox.hortonworks.com:9099/eagle-service";>Eagle service</a>
 is started.</p>
 
 <h3 id="user-profile-offline-training">User Profile Offline Training</h3>
@@ -234,10 +234,9 @@ is started.</p>
       <li>
         <p>Option 1: command line</p>
 
-        <div class="highlighter-rouge"><pre class="highlight"><code>$ cd 
&lt;eagle-home&gt;/bin
+        <pre><code>$ cd &lt;eagle-home&gt;/bin
 $ bin/eagle-userprofile-scheduler.sh --site sandbox start
 </code></pre>
-        </div>
       </li>
       <li>
         <p>Option 2: start via Apache Ambari
@@ -265,9 +264,8 @@ $ bin/eagle-userprofile-scheduler.sh --s
 
     <p>submit userProfiles topology if it’s not on <a 
href="http://sandbox.hortonworks.com:8744";>topology UI</a></p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>$ 
bin/eagle-topology.sh --main 
org.apache.eagle.security.userprofile.UserProfileDetectionMain --config 
conf/sandbox-userprofile-topology.conf start
+    <pre><code>$ bin/eagle-topology.sh --main 
org.apache.eagle.security.userprofile.UserProfileDetectionMain --config 
conf/sandbox-userprofile-topology.conf start
 </code></pre>
-    </div>
   </li>
   <li>
     <p><strong>Option 2</strong>: Apache Ambari</p>
@@ -282,24 +280,23 @@ $ bin/eagle-userprofile-scheduler.sh --s
   <li>Prepare sample data for ML training and validation sample data
     <ul>
      <li>a. Download the following sample data to be used for training</li>
-      <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code 
class="highlighter-rouge">user1.hdfs-audit.2015-10-11-00.txt</code></a></li>
-      <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code 
class="highlighter-rouge">user1.hdfs-audit.2015-10-11-01.txt</code></a></li>
-      <li>b. Downlaod <a href="/data/userprofile-validate.txt"><code 
class="highlighter-rouge">userprofile-validate.txt</code></a>file which 
contains data points that you can try to test the models</li>
+      <li><a 
href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code>user1.hdfs-audit.2015-10-11-00.txt</code></a></li>
+      <li><a 
href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code>user1.hdfs-audit.2015-10-11-01.txt</code></a></li>
+      <li>b. Download the <a 
href="/data/userprofile-validate.txt"><code>userprofile-validate.txt</code></a> file, 
which contains data points that you can use to test the models</li>
     </ul>
   </li>
   <li>Copy the files (downloaded in the previous step) into a location in 
sandbox 
-For example: <code 
class="highlighter-rouge">/usr/hdp/current/eagle/lib/userprofile/data/</code></li>
-  <li>Modify <code 
class="highlighter-rouge">&lt;Eagle-home&gt;/conf/sandbox-userprofile-scheduler.conf
 </code>
-update <code class="highlighter-rouge">training-audit-path</code> to set to 
the path for training data sample (the path you used for Step 1.a)
+For example: <code>/usr/hdp/current/eagle/lib/userprofile/data/</code></li>
+  <li>Modify <code>&lt;Eagle-home&gt;/conf/sandbox-userprofile-scheduler.conf 
</code>
+update <code>training-audit-path</code> to point to the training data sample 
(the path you used for Step 1.a) and
 update detection-audit-path to point to the validation data (the path you 
used for Step 1.b)</li>
   <li>Run ML training program from eagle UI</li>
   <li>
    <p>Produce Apache Kafka data using the contents of the validate file (Step 
1.b).
-Run the command (assuming the eagle configuration uses Kafka topic <code 
class="highlighter-rouge">sandbox_hdfs_audit_log</code>)</p>
+Run the command (assuming the eagle configuration uses Kafka topic 
<code>sandbox_hdfs_audit_log</code>)</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code> 
./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic 
sandbox_hdfs_audit_log
+    <pre><code> ./kafka-console-producer.sh --broker-list 
sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log
 </code></pre>
-    </div>
   </li>
  <li>Paste a few lines of data from the validate file into kafka-console-producer. 
 Check <a 
href="http://localhost:9099/eagle-service/#/dam/alertList";>http://localhost:9099/eagle-service/#/dam/alertList</a>
 for generated alerts</li>

Modified: eagle/site/docs/usecases.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/usecases.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/usecases.html (original)
+++ eagle/site/docs/usecases.html Tue Jan  3 01:19:05 2017
@@ -224,7 +224,7 @@
    <p>Data activity represents how users explore the data provided by big data 
platforms. Analyzing data activity and alerting on insecure access are 
fundamental requirements for securing enterprise data. As data volume grows 
exponentially with Hadoop<sup id="fnref:HADOOP"><a href="#fn:HADOOP" 
class="footnote">1</a></sup>, Hive<sup id="fnref:HIVE"><a href="#fn:HIVE" 
class="footnote">2</a></sup>, Spark<sup id="fnref:SPARK"><a href="#fn:SPARK" 
class="footnote">3</a></sup> and similar technologies, understanding the data 
activity of every user becomes extremely hard, let alone alerting on a single 
malicious event in real time among petabytes of streaming data per day.</p>
   </li>
   <li>
-    <p>Securing enterprise data starts from understanding data activities for 
every user. Apache Eagle (incubating, called Eagle in the following) has 
integrated with many popular big data platforms e.g. Hadoop, Hive, Spark, 
Cassandra<sup id="fnref:CASSANDRA"><a href="#fn:CASSANDRA" 
class="footnote">4</a></sup> etc. With Eagle user can browse data hierarchy, 
mark sensitive data and then create comprehensive policy to alert for insecure 
data access.</p>
+    <p>Securing enterprise data starts with understanding the data activities of 
every user. Apache Eagle (called Eagle in the following) integrates with 
many popular big data platforms, e.g. Hadoop, Hive, Spark, Cassandra<sup 
id="fnref:CASSANDRA"><a href="#fn:CASSANDRA" class="footnote">4</a></sup>, etc. 
With Eagle, users can browse the data hierarchy, mark sensitive data, and then 
create comprehensive policies to alert on insecure data access.</p>
   </li>
 </ul>
 

Modified: eagle/site/docs/user-profile-ml.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/user-profile-ml.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/docs/user-profile-ml.html (original)
+++ eagle/site/docs/user-profile-ml.html Tue Jan  3 01:19:05 2017
@@ -217,7 +217,7 @@
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
         <h1 class="page-header" style="margin-top: 0px">User Profile Machine 
Learning</h1>
-        <p>Apache Eagle (incubating, called Eagle in the following) provides 
capabilities to define user activity patterns or user profiles for Apache 
Hadoop users based on the user behavior in the platform. The idea is to provide 
anomaly detection capability without setting hard thresholds in the system. The 
user profiles generated by our system are modeled using machine-learning 
algorithms and used for detection of anomalous user activities, where users’ 
activity pattern differs from their pattern history. Currently Eagle uses two 
algorithms for anomaly detection: Eigen-Value Decomposition and Density 
Estimation. The algorithms read data from HDFS audit logs, slice and dice data, 
and generate models for each user in the system. Once models are generated, 
Eagle uses the Apache Storm framework for near-real-time anomaly detection to 
determine if current user activities are suspicious or not with respect to 
their model. The block diagram below shows the current pipeline for user
  profile training and online detection.</p>
+        <p>Apache Eagle (called Eagle in the following) provides capabilities 
to define user activity patterns or user profiles for Apache Hadoop users based 
on the user behavior in the platform. The idea is to provide anomaly detection 
capability without setting hard thresholds in the system. The user profiles 
generated by our system are modeled using machine-learning algorithms and used 
for detection of anomalous user activities, where users’ activity pattern 
differs from their pattern history. Currently Eagle uses two algorithms for 
anomaly detection: Eigen-Value Decomposition and Density Estimation. The 
algorithms read data from HDFS audit logs, slice and dice data, and generate 
models for each user in the system. Once models are generated, Eagle uses the 
Apache Storm framework for near-real-time anomaly detection to determine if 
current user activities are suspicious or not with respect to their model. The 
block diagram below shows the current pipeline for user profile training and 
online detection.</p>
 
 <p><img src="/images/docs/userprofile-arch.png" alt="" /></p>
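
<p>To make the density-estimation idea above concrete, here is a toy, 
hypothetical sketch rather than Eagle’s actual implementation: it fits an 
independent Gaussian per feature from a user’s historical activity counts and 
flags a new observation whose log-likelihood falls below a chosen threshold. 
The class name, feature layout and threshold are all illustrative 
assumptions.</p>

<pre><code>/** Toy per-user density-estimation anomaly check (illustration only, not Eagle code). */
public class UserProfileDensitySketch {

    private final double[] mean;
    private final double[] var;

    /** Fit one Gaussian per feature from historical activity vectors (rows = observations). */
    public UserProfileDensitySketch(double[][] history) {
        int features = history[0].length;
        mean = new double[features];
        var = new double[features];
        for (double[] row : history) {
            for (int j = 0; j &lt; features; j++) mean[j] += row[j] / history.length;
        }
        for (double[] row : history) {
            for (int j = 0; j &lt; features; j++) {
                double d = row[j] - mean[j];
                var[j] += d * d / history.length;
            }
        }
        for (int j = 0; j &lt; features; j++) var[j] = Math.max(var[j], 1e-6); // avoid zero variance
    }

    /** Sum of per-feature Gaussian log-densities; lower means less typical for this user. */
    public double logLikelihood(double[] x) {
        double ll = 0.0;
        for (int j = 0; j &lt; x.length; j++) {
            double d = x[j] - mean[j];
            ll += -0.5 * Math.log(2 * Math.PI * var[j]) - d * d / (2 * var[j]);
        }
        return ll;
    }

    public boolean isAnomalous(double[] x, double threshold) {
        return logLikelihood(x) &lt; threshold;
    }

    public static void main(String[] args) {
        // Hypothetical hourly operation counts (open, delete, rename) for one user.
        double[][] history = { {120, 2, 5}, {130, 1, 4}, {125, 3, 6}, {118, 2, 5} };
        UserProfileDensitySketch model = new UserProfileDensitySketch(history);
        double[] current = {40, 90, 3}; // unusually many deletes for this user
        System.out.println(model.isAnomalous(current, -50.0)); // prints true
    }
}
</code></pre>

<p>In Eagle itself the models are trained offline from HDFS audit logs and 
evaluated inside the Storm topology described above; the sketch only 
illustrates the scoring step.</p>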
 

Modified: eagle/site/feed.xml
URL: 
http://svn.apache.org/viewvc/eagle/site/feed.xml?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/feed.xml (original)
+++ eagle/site/feed.xml Tue Jan  3 01:19:05 2017
@@ -5,9 +5,9 @@
     <description>Eagle - Analyze Big Data Platforms for Security and 
Performance</description>
     <link>http://goeagle.io/</link>
     <atom:link href="http://goeagle.io/feed.xml"; rel="self" 
type="application/rss+xml"/>
-    <pubDate>Mon, 12 Dec 2016 14:58:55 +0800</pubDate>
-    <lastBuildDate>Mon, 12 Dec 2016 14:58:55 +0800</lastBuildDate>
-    <generator>Jekyll v3.1.2</generator>
+    <pubDate>Fri, 30 Dec 2016 10:54:23 +0800</pubDate>
+    <lastBuildDate>Fri, 30 Dec 2016 10:54:23 +0800</lastBuildDate>
+    <generator>Jekyll v2.5.3</generator>
     
       <item>
        <title>Apache Eagle officially released: a distributed real-time Hadoop data security solution</title>
@@ -67,7 +67,7 @@
 
 
&lt;p&gt;&lt;strong&gt;Below is an example of how Eagle processes events and generates alerts:&lt;/strong&gt;&lt;/p&gt;
 
-&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
+&lt;pre&gt;&lt;code&gt;StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
 StreamProducer producer = env.newSource(new 
KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare 
kafka source
        .flatMap(new AuditLogTransformer()) // transform event
        .groupBy(Arrays.asList(0))  // group by 1st field
@@ -75,7 +75,6 @@ StreamProducer producer = env.newSource(
       .alertWithConsumer(&quot;userActivity&quot;, &quot;userProfileExecutor&quot;) // ML policy evaluation
 env.execute(); // execute stream processing and alert
 &lt;/code&gt;&lt;/pre&gt;
-&lt;/div&gt;
 
 &lt;p&gt;&lt;strong&gt;Alerting Framework&lt;/strong&gt; The Eagle alerting 
framework consists of the stream metadata API, the policy engine service 
provider API, the policy Partitioner API, and the alert de-duplication 
framework:&lt;/p&gt;
 
@@ -85,7 +84,7 @@ env.execute(); // execute stream process
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Extensibility&lt;/strong&gt; The Eagle policy engine 
service provider API allows you to plug in new policy engines&lt;/p&gt;
 
-    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  public interface 
PolicyEvaluatorServiceProvider {
+    &lt;pre&gt;&lt;code&gt;  public interface PolicyEvaluatorServiceProvider {
     public String getPolicyType();         // literal string to identify one 
type of policy
     public Class&amp;lt;? extends PolicyEvaluator&amp;gt; 
getPolicyEvaluator(); // get policy evaluator implementation
     public List&amp;lt;Module&amp;gt; getBindingModules();  // policy text 
with json format to object mapping
@@ -96,17 +95,15 @@ env.execute(); // execute stream process
     public void onPolicyDelete(); // invoked when policy is deleted
   }
 &lt;/code&gt;&lt;/pre&gt;
-    &lt;/div&gt;
   &lt;/li&gt;
   &lt;li&gt;&lt;strong&gt;Policy Partitioner API&lt;/strong&gt; Allows policies 
to run in parallel on different physical nodes and lets you define a custom 
policy Partitioner class. These capabilities allow policies and events to be 
processed in a fully distributed fashion.&lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; Eagle supports a policy 
partitioning interface so that large numbers of policies can run scalably and 
concurrently&lt;/p&gt;
 
-    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  public interface PolicyPartitioner 
extends Serializable {
+    &lt;pre&gt;&lt;code&gt;  public interface PolicyPartitioner extends 
Serializable {
     int partition(int numTotalPartitions, String policyType, String policyId); 
// method to distribute policies
   }
 &lt;/code&gt;&lt;/pre&gt;
-    &lt;/div&gt;
 
     &lt;p&gt;&lt;img src=&quot;/images/posts/policy-partition.png&quot; 
alt=&quot;&quot; /&gt;&lt;/p&gt;
 
@@ -163,24 +160,21 @@ Eagle 支持根据用户
   &lt;li&gt;
     
&lt;p&gt;Policy on a single event (a user accesses a sensitive data column in Hive)&lt;/p&gt;
 
-    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  from 
hiveAccessLogStream[sensitivityType==&#39;PHONE_NUMBER&#39;] select * insert 
into outputStream;
+    &lt;pre&gt;&lt;code&gt;  from 
hiveAccessLogStream[sensitivityType==&#39;PHONE_NUMBER&#39;] select * insert 
into outputStream;
 &lt;/code&gt;&lt;/pre&gt;
-    &lt;/div&gt;
   &lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;Window-based policy (a user accesses the directory /tmp/private 
more than 5 times within 10 minutes)&lt;/p&gt;
 
-    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  hdfsAuditLogEventStream[(src == 
&#39;/tmp/private&#39;)]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &amp;gt;= 5 insert 
into outputStream;
+    &lt;pre&gt;&lt;code&gt;  hdfsAuditLogEventStream[(src == 
&#39;/tmp/private&#39;)]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &amp;gt;= 5 insert 
into outputStream;
 &lt;/code&gt;&lt;/pre&gt;
-    &lt;/div&gt;
   &lt;/li&gt;
 &lt;/ul&gt;
 
 &lt;p&gt;&lt;strong&gt;Query Service&lt;/strong&gt; Eagle provides SQL-like REST 
APIs for comprehensive computation, query and analysis over massive data sets, 
supporting filtering, aggregation, histograms, sorting, top, arithmetic 
expressions, pagination and more. Eagle uses HBase as its default data store, 
but JDBC-based relational databases are also supported. When HBase is chosen 
as the store, Eagle natively gains HBase&#39;s ability to store and query massive 
monitoring data: the Eagle query framework compiles the user-supplied SQL-like 
query into native HBase Filter objects and supports HBase Coprocessors to 
further improve response time.&lt;/p&gt;
 
-&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;query=AlertDefinitionService[@dataSource=&quot;hiveQueryLog&quot;]{@policyDef}&amp;amp;pageSize=100000
+&lt;pre&gt;&lt;code&gt;query=AlertDefinitionService[@dataSource=&quot;hiveQueryLog&quot;]{@policyDef}&amp;amp;pageSize=100000
 &lt;/code&gt;&lt;/pre&gt;
-&lt;/div&gt;
 
 &lt;h2 id=&quot;eagleebay&quot;&gt;Eagle use cases at eBay&lt;/h2&gt;
 
&lt;p&gt;Today, Eagle&#39;s data activity monitoring system is deployed on a 
Hadoop cluster of more than 2,500 nodes to protect the security of hundreds of 
petabytes of data, and it is planned to be rolled out to more than ten 
additional Hadoop clusters by the end of this year, covering all of eBay&#39;s 
major Hadoop deployments with over 10,000 nodes. In our production environment 
we have configured basic security policies for the data in HDFS, Hive and 
other clusters, and will keep introducing more policies before the end of the 
year to keep critical data secure. Eagle&#39;s policies currently cover many 
patterns, including access patterns, frequently accessed data sets, predefined 
query types, Hive tables and columns, HBase tables, and all policies related 
to user profiles generated by machine-learning models. We also have a broad 
set of policies to prevent data loss, data being copied to insecure locations, 
and sensitive data being accessed from unauthorized zones. The great 
flexibility and extensibility of Eagle&#39;s policy definitions will let us easily 
add more complex policies to support more diverse use cases in the 
future.&lt;/p&gt;

Modified: eagle/site/index.html
URL: 
http://svn.apache.org/viewvc/eagle/site/index.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/index.html (original)
+++ eagle/site/index.html Tue Jan  3 01:19:05 2017
@@ -59,7 +59,7 @@
                 <li><a class="menu" href="#about_page">ABOUT</a></li>
                 <li><a class="menu" href="#diagram_page">ARCHITECTURE</a></li>
                 <li><a class="menu" href="#community_page">COMMUNITY</a></li>
-                <li><a class="menu" 
href="https://github.com/apache/incubator-eagle"; target="_blank" 
title="Github">GITHUB</a></li>
+                <li><a class="menu" href="https://github.com/apache/eagle"; 
target="_blank" title="Github">GITHUB</a></li>
               </ul>
             </div>
             <!-- /.navbar-collapse --> 
@@ -83,7 +83,7 @@
     <div class="homewrapper">
       <div class="hometitle"> <img src="images/feather.png" height="60px"> 
</div>
       <div class="hometext">
-        <h2 style="font-weight:500;">Apache Eagle (incubating)</h2>
+        <h2 style="font-weight:500;">Apache Eagle</h2>
         <h3>Analyze Big Data Platforms For Security and Performance</h3>
      </div>
     </div>
@@ -102,19 +102,21 @@
 <div class="workwrapper" id="about_page">
   <div class="container">
     <div class="row">
-      <h2 class="sectiontile">ABOUT APACHE EAGLE (incubating)</h2>
+      <h2 class="sectiontile">ABOUT APACHE EAGLE</h2>
       <div class="col-md-12">
-        <p style="width:80%; margin-left:auto; margin-right:auto;"> Apache 
Eagle (incubating, called Eagle in the following) is an open source analytics 
solution for identifying security and performance issues instantly on big data 
platforms, e.g. Apache Hadoop, Apache Spark etc. It analyzes data activities, 
yarn applications, jmx metrics, and daemon logs etc., provides state-of-the-art 
alert engine to identify security breach, performance issues and shows 
insights. </p>
+        <p style="width:80%; margin-left:auto; margin-right:auto;"> Apache 
Eagle (called Eagle in the following) is an open source analytics solution for 
identifying security and performance issues instantly on big data platforms, 
e.g. Apache Hadoop, Apache Spark, etc. It analyzes data activities, YARN 
applications, JMX metrics, and daemon logs, provides a state-of-the-art alert 
engine to identify security breaches and performance issues, and shows 
insights. </p>
         <br/>
        <p style="width:80%; margin-left:auto; margin-right:auto;"> Big data 
platforms normally generate a huge amount of operational logs and metrics in 
real time. Eagle was founded to solve hard problems in securing and tuning the 
performance of big data platforms by keeping metrics and logs always available 
and alerting immediately, even under huge traffic.</p>
-        <div class="sepline"></div>
         <!-- 
+        <div class="sepline"></div>
         <P>Eagle has been accepted as an Apache Incubator Project on Oct 26, 
2015.</P>
          -->
+        <!-- 
         <div style="padding: 25px;">
           <a href="http://incubator.apache.org/";><img 
src="/images/apache-incubator-logo-small.png"></a><br /><br />
           <span>Apache Eagle is an effort undergoing incubation at The Apache 
Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is 
required of all newly accepted projects until a further review indicates that 
the infrastructure, communications, and decision making process have stabilized 
in a manner consistent with other successful ASF projects. While incubation 
status is not necessarily a reflection of the completeness or stability of the 
code, it does indicate that the project has yet to be fully endorsed by the 
ASF.</span>
         </div>
+         -->
         <div class="sepline"></div>
         <p>Eagle analyzes big data platforms and reports issues in 3 steps:</p>
       </div>
@@ -234,8 +236,8 @@
             <p>Learn latest updates about Eagle through:</p>
         <div class="row">
           <div class="col-md-6">
-<iframe 
src="https://ghbtns.com/github-btn.html?user=apache&repo=incubator-eagle&type=star&count=true";
 frameborder="0" scrolling="0" width="150px" height="20px"></iframe>
-                <iframe 
src="https://ghbtns.com/github-btn.html?user=apache&repo=incubator-eagle&type=fork&count=true";
 frameborder="0" scrolling="0" width="150px" height="20px"></iframe>
+<iframe 
src="https://ghbtns.com/github-btn.html?user=apache&repo=eagle&type=star&count=true";
 frameborder="0" scrolling="0" width="150px" height="20px"></iframe>
+                <iframe 
src="https://ghbtns.com/github-btn.html?user=apache&repo=eagle&type=fork&count=true";
 frameborder="0" scrolling="0" width="150px" height="20px"></iframe>
 <br/>
 
 <a href="https://twitter.com/TheApacheEagle"; class="twitter-follow-button" 
data-show-count="false">Follow @TheApacheEagle</a>
@@ -280,7 +282,7 @@
 Copyright © 2015 <a href="http://www.apache.org";>The Apache Software 
Foundation</a>, Licensed under the <a 
href="http://www.apache.org/licenses/LICENSE-2.0";>Apache License, Version 
2.0</a>.
 </div>
 <div>
-Apache Eagle, Eagle, Apache, the Apache feather logo, and the Apache Incubator 
project logo are trademarks of The Apache Software Foundation.
+Apache Eagle, Eagle, Apache, and the Apache feather logo are trademarks of The 
Apache Software Foundation.
 </div>
 </div></div>
     </div>

Modified: eagle/site/post/2015/10/27/apache-eagle-announce-cn.html
URL: 
http://svn.apache.org/viewvc/eagle/site/post/2015/10/27/apache-eagle-announce-cn.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/post/2015/10/27/apache-eagle-announce-cn.html (original)
+++ eagle/site/post/2015/10/27/apache-eagle-announce-cn.html Tue Jan  3 
01:19:05 2017
@@ -143,7 +143,7 @@
 
 
<p><strong>Below is an example of how Eagle processes events and generates alerts:</strong></p>
 
-<div class="highlighter-rouge"><pre 
class="highlight"><code>StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
+<pre><code>StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
 StreamProducer producer = env.newSource(new 
KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare 
kafka source
        .flatMap(new AuditLogTransformer()) // transform event
        .groupBy(Arrays.asList(0))  // group by 1st field
@@ -151,7 +151,6 @@ StreamProducer producer = env.newSource(
       .alertWithConsumer("userActivity", "userProfileExecutor") // ML policy evaluation
 env.execute(); // execute stream processing and alert
 </code></pre>
-</div>
 
 <p><strong>Alerting Framework</strong> The Eagle alerting framework consists 
of the stream metadata API, the policy engine service provider API, the policy 
Partitioner API, and the alert de-duplication framework:</p>
 
@@ -161,7 +160,7 @@ env.execute(); // execute stream process
   <li>
     <p><strong>Extensibility</strong> The Eagle policy engine service provider 
API allows you to plug in new policy engines</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>  public 
interface PolicyEvaluatorServiceProvider {
+    <pre><code>  public interface PolicyEvaluatorServiceProvider {
     public String getPolicyType();         // literal string to identify one 
type of policy
     public Class&lt;? extends PolicyEvaluator&gt; getPolicyEvaluator(); // get 
policy evaluator implementation
     public List&lt;Module&gt; getBindingModules();  // policy text with json 
format to object mapping
@@ -172,17 +171,15 @@ env.execute(); // execute stream process
     public void onPolicyDelete(); // invoked when policy is deleted
   }
 </code></pre>
-    </div>
   </li>
   <li><strong>Policy Partitioner API</strong> Allows policies to run in 
parallel on different physical nodes and lets you define a custom policy 
Partitioner class. These capabilities allow policies and events to be 
processed in a fully distributed fashion.</li>
   <li>
     <p><strong>Scalability</strong> Eagle supports a policy partitioning 
interface so that large numbers of policies can run scalably and concurrently 
(a toy partitioner sketch follows the interface and diagram below)</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>  public 
interface PolicyPartitioner extends Serializable {
+    <pre><code>  public interface PolicyPartitioner extends Serializable {
     int partition(int numTotalPartitions, String policyType, String policyId); 
// method to distribute policies
   }
 </code></pre>
-    </div>
 
     <p><img src="/images/posts/policy-partition.png" alt="" /></p>
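
<p>As an illustration of the PolicyPartitioner interface shown above, a 
minimal, hypothetical implementation could simply hash the policy id onto the 
available partitions. This is only a sketch of the extension point, not code 
shipped with Eagle.</p>

<pre><code>// Hypothetical sketch: spread policies across partitions by hashing the policy id.
// Assumes the PolicyPartitioner interface shown above is on the classpath.
public class HashPolicyPartitioner implements PolicyPartitioner {
    @Override
    public int partition(int numTotalPartitions, String policyType, String policyId) {
        // Mask the sign bit so the result is non-negative;
        // the same policy id always lands on the same partition.
        return (policyId.hashCode() &amp; Integer.MAX_VALUE) % numTotalPartitions;
    }
}
</code></pre>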
 
@@ -239,24 +236,21 @@ Eagle 支持根据用户
   <li>
     <p>Policy on a single event (a user accesses a sensitive data column in Hive)</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>  from 
hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into 
outputStream;
+    <pre><code>  from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] 
select * insert into outputStream;
 </code></pre>
-    </div>
   </li>
   <li>
     <p>Window-based policy (a user accesses the directory /tmp/private more 
than 5 times within 10 minutes)</p>
 
-    <div class="highlighter-rouge"><pre class="highlight"><code>  
hdfsAuditLogEventStream[(src == 
'/tmp/private')]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into 
outputStream;
+    <pre><code>  hdfsAuditLogEventStream[(src == 
'/tmp/private')]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into 
outputStream;
 </code></pre>
-    </div>
   </li>
 </ul>
 
 <p><strong>Query Service</strong> Eagle provides SQL-like REST APIs for 
comprehensive computation, query and analysis over massive data sets, 
supporting filtering, aggregation, histograms, sorting, top, arithmetic 
expressions, pagination and more. Eagle uses HBase as its default data store, 
but JDBC-based relational databases are also supported. When HBase is chosen 
as the store, Eagle natively gains HBase's ability to store and query massive 
monitoring data: the Eagle query framework compiles the user-supplied SQL-like 
query into native HBase Filter objects and supports HBase Coprocessors to 
further improve response time.</p>
 
-<div class="highlighter-rouge"><pre 
class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000
+<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000
 </code></pre>
-</div>
 
 <h2 id="eagleebay">Eagle use cases at eBay</h2>
 
<p>Today, Eagle's data activity monitoring system is deployed on a Hadoop 
cluster of more than 2,500 nodes to protect the security of hundreds of 
petabytes of data, and it is planned to be rolled out to more than ten 
additional Hadoop clusters by the end of this year, covering all of eBay's 
major Hadoop deployments with over 10,000 nodes. In our production environment 
we have configured basic security policies for the data in HDFS, Hive and 
other clusters, and will keep introducing more policies before the end of the 
year to keep critical data secure. Eagle's policies currently cover many 
patterns, including access patterns, frequently accessed data sets, predefined 
query types, Hive tables and columns, HBase tables, and all policies related 
to user profiles generated by machine-learning models. We also have a broad 
set of policies to prevent data loss, data being copied to insecure locations, 
and sensitive data being accessed from unauthorized zones. The great 
flexibility and extensibility of Eagle's policy definitions will let us easily 
add more complex policies to support more diverse use cases in the 
future.</p>

Modified: eagle/site/sup/index.html
URL: 
http://svn.apache.org/viewvc/eagle/site/sup/index.html?rev=1777047&r1=1777046&r2=1777047&view=diff
==============================================================================
--- eagle/site/sup/index.html (original)
+++ eagle/site/sup/index.html Tue Jan  3 01:19:05 2017
@@ -3,7 +3,7 @@
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge">
 
-       <title>Eagle - Apache Eagle (incubating) Security</title>
+       <title>Eagle - Apache Eagle Security</title>
        <meta name="description" content="Eagle - Analyze Big Data Platforms 
for Security and Performance">
 
        <meta name="keywords" content="Eagle, Hadoop, Security, Real Time">
@@ -128,8 +128,8 @@
         </ul>
       </div>
       <div class="col-xs-6 col-sm-9 page-main-content" style="margin-left: 
-15px" id="loadcontent">
-        <h1 class="page-header" style="margin-top: 0px">Apache Eagle 
(incubating) Security</h1>
-        <p>The Apache Software Foundation takes a very active stance in 
eliminating security problems in its software products. Apache Eagle 
(incubating) is also responsive to such issues around its features.</p>
+        <h1 class="page-header" style="margin-top: 0px">Apache Eagle 
Security</h1>
+        <p>The Apache Software Foundation takes a very active stance in 
eliminating security problems in its software products. Apache Eagle is also 
responsive to such issues around its features.</p>
 
 <p>If you have any concern regarding Eagle’s security, or you believe a 
vulnerability has been discovered, don’t hesitate to contact the Apache 
Security Team by sending an email to <a 
href="&#109;&#097;&#105;&#108;&#116;&#111;:&#115;&#101;&#099;&#117;&#114;&#105;&#116;&#121;&#064;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#115;&#101;&#099;&#117;&#114;&#105;&#116;&#121;&#064;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a>.
 In the message, indicate that the project name is Eagle, provide a 
description of the issue, and, where possible, the steps to reproduce it. The 
security team and the Eagle community will get back to you after assessing the 
findings.</p>
 

