Modified: eagle/site/docs/tutorial/classification.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/classification.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/classification.html (original)
+++ eagle/site/docs/tutorial/classification.html Thu Jan 12 07:44:47 2017
@@ -241,30 +241,33 @@ Currently this feature is available ONLY
 
        <p>You may configure the default path for Apache Hadoop clients to 
connect to the remote HDFS namenode.</p>
 
-        <pre><code>  
classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020
+        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020
 </code></pre>
+        </div>
       </li>
       <li>
         <p>HA case</p>
 
        <p>Basically, you point your fs.defaultFS at your nameservice and let 
the client know how it is configured (the backing namenodes) and how to fail over 
between them in HA mode.</p>
 
-        <pre><code>  classification.fs.defaultFS=hdfs://nameservice1
+        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.fs.defaultFS=hdfs://nameservice1
   classification.dfs.nameservices=nameservice1
   classification.dfs.ha.namenodes.nameservice1=namenode1,namenode2
   
classification.dfs.namenode.rpc-address.nameservice1.namenode1=hadoopnamenode01:8020
   
classification.dfs.namenode.rpc-address.nameservice1.namenode2=hadoopnamenode02:8020
   
classification.dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
        <p>For a Kerberos-secured cluster, you need to get a keytab file and the 
principal from your admin, and configure “eagle.keytab.file” and 
“eagle.kerberos.principal” to authenticate Eagle’s access.</p>
 
-        <pre><code>  
classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
+        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
   [email protected]
 </code></pre>
+        </div>
 
        <p>If you see an “invalid server principal name” exception, 
you may need to check the DNS resolver, or the data-transfer settings, such as 
“dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, 
“dfs.trustedchannel.resolver.class”, and 
“dfs.datatransfer.client.encrypt”.</p>
       </li>
@@ -275,12 +278,13 @@ Currently this feature is available ONLY
       <li>
         <p>Basic</p>
 
-        <pre><code>  classification.accessType=metastoredb_jdbc
+        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.accessType=metastoredb_jdbc
   classification.password=hive
   classification.user=hive
   classification.jdbcDriverClassName=com.mysql.jdbc.Driver
   
classification.jdbcUrl=jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -293,16 +297,17 @@ Currently this feature is available ONLY
 
        <p>You need to set the “hbase.zookeeper.quorum” property (for example, 
“localhost”) and the “hbase.zookeeper.property.clientPort” property.</p>
 
-        <pre><code>  classification.hbase.zookeeper.property.clientPort=2181
+        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.hbase.zookeeper.property.clientPort=2181
   classification.hbase.zookeeper.quorum=localhost
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
        <p>Depending on your environment, you can add or remove some of the 
following properties. Here is a reference.</p>
 
-        <pre><code>  classification.hbase.zookeeper.property.clientPort=2181
+        <div class="highlighter-rouge"><pre class="highlight"><code>  
classification.hbase.zookeeper.property.clientPort=2181
   classification.hbase.zookeeper.quorum=localhost
   classification.hbase.security.authentication=kerberos
   classification.hbase.master.kerberos.principal=hadoop/[email protected]
@@ -310,6 +315,7 @@ Currently this feature is available ONLY
   classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab
   [email protected]
 </code></pre>
+        </div>
       </li>
     </ul>
   </li>
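The hunks above all show `classification.`-prefixed properties for HDFS, Hive, and HBase. As a rough illustration of how such a prefix convention can work, here is a minimal, hypothetical sketch (not Eagle's actual code) that strips the prefix so the remaining keys could be handed to a plain Hadoop-style configuration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: keep only "classification."-prefixed entries,
// minus the prefix, ignoring everything else. Class and method names
// are illustrative, not from the Eagle code base.
public class ClassificationProps {
    static final String PREFIX = "classification.";

    public static Map<String, String> strip(Map<String, String> raw) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : raw.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                // "classification.fs.defaultFS" -> "fs.defaultFS"
                out.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> raw = new LinkedHashMap<>();
        raw.put("classification.fs.defaultFS", "hdfs://nameservice1");
        raw.put("classification.dfs.nameservices", "nameservice1");
        raw.put("unrelated.key", "x");
        System.out.println(strip(raw));
    }
}
```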

Modified: eagle/site/docs/tutorial/ldap.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/ldap.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/ldap.html (original)
+++ eagle/site/docs/tutorial/ldap.html Thu Jan 12 07:44:47 2017
@@ -219,7 +219,7 @@
 
 <p>Step 1: edit configuration under conf/ldap.properties.</p>
 
-<pre><code>ldap.server=ldap://localhost:10389
+<div class="highlighter-rouge"><pre 
class="highlight"><code>ldap.server=ldap://localhost:10389
 ldap.username=uid=admin,ou=system
 ldap.password=secret
 ldap.user.searchBase=ou=Users,o=mojo
@@ -228,12 +228,13 @@ ldap.user.groupSearchBase=ou=groups,o=mo
 acl.adminRole=
 acl.defaultRole=ROLE_USER
 </code></pre>
+</div>
 
 <p>acl.adminRole and acl.defaultRole are two customized properties for Eagle. 
Eagle manages admin users with groups. If you set acl.adminRole as 
ROLE_{EAGLE-ADMIN-GROUP-NAME}, members in this group have the admin privilege. 
acl.defaultRole is ROLE_USER.</p>
 
 <p>Step 2: edit conf/eagle-service.conf, and add 
springActiveProfile=”default”</p>
 
-<pre><code>eagle{
+<div class="highlighter-rouge"><pre class="highlight"><code>eagle{
     service{
         storage-type="hbase"
         hbase-zookeeper-quorum="localhost"
@@ -243,6 +244,7 @@ acl.defaultRole=ROLE_USER
     }
 }
 </code></pre>
+</div>
 
 
       </div><!--end of loadcontent-->  
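The acl.adminRole / acl.defaultRole convention described above (members of a group whose name matches ROLE_{EAGLE-ADMIN-GROUP-NAME} get admin privilege; everyone else gets the default role) can be sketched as follows. The method name and group strings are illustrative assumptions, not Eagle's implementation:

```java
// Hypothetical sketch of the role mapping described in the LDAP tutorial.
public class LdapRoleMapping {
    // Derive a candidate role from the user's LDAP group and compare it
    // against the configured acl.adminRole; otherwise fall back to
    // acl.defaultRole.
    public static String resolveRole(String userGroup, String adminRole, String defaultRole) {
        String candidate = "ROLE_" + userGroup.toUpperCase();
        return candidate.equals(adminRole) ? candidate : defaultRole;
    }

    public static void main(String[] args) {
        System.out.println(resolveRole("eagle-admins", "ROLE_EAGLE-ADMINS", "ROLE_USER"));
        System.out.println(resolveRole("analysts", "ROLE_EAGLE-ADMINS", "ROLE_USER"));
    }
}
```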

Modified: eagle/site/docs/tutorial/notificationplugin.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/notificationplugin.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/notificationplugin.html (original)
+++ eagle/site/docs/tutorial/notificationplugin.html Thu Jan 12 07:44:47 2017
@@ -242,12 +242,12 @@
   </li>
 </ul>
 
-<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" />
-### Customized Notification Plugin</p>
+<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" /></p>
+<h3 id="customized-notification-plugin">Customized Notification Plugin</h3>
 
 <p>To integrate a customized notification plugin, we must implement an 
interface</p>
 
-<pre><code>public interface NotificationPlugin {
+<div class="highlighter-rouge"><pre class="highlight"><code>public interface 
NotificationPlugin {
 /**
  * for initialization
  * @throws Exception
@@ -277,24 +277,26 @@ void onAlert(AlertAPIEntity alertEntity)
 List&lt;NotificationStatus&gt; getStatusList();
 } Examples: AlertKafkaPlugin, AlertEmailPlugin, and AlertEagleStorePlugin.
 </code></pre>
+</div>
 
<p>The second and crucial step is to register the configuration of the 
customized plugin. In other words, we need to persist the configuration template 
into the database in order to expose the configuration to users in the front 
end.</p>
 
 <p>Examples:</p>
 
-<pre><code>{
-   "prefix": "alertNotifications",
-   "tags": {
-     "notificationType": "kafka"
-   },
-   "className": "org.apache.eagle.notification.plugin.AlertKafkaPlugin",
-   "description": "send alert to kafka bus",
-   "enabled":true,
-   "fields": 
"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"
-}
-</code></pre>
+<div class="highlighter-rouge"><pre class="highlight"><code><span 
class="p">{</span><span class="w">
+   </span><span class="nt">"prefix"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"alertNotifications"</span><span 
class="p">,</span><span class="w">
+   </span><span class="nt">"tags"</span><span class="p">:</span><span 
class="w"> </span><span class="p">{</span><span class="w">
+     </span><span class="nt">"notificationType"</span><span 
class="p">:</span><span class="w"> </span><span class="s2">"kafka"</span><span 
class="w">
+   </span><span class="p">},</span><span class="w">
+   </span><span class="nt">"className"</span><span class="p">:</span><span 
class="w"> </span><span 
class="s2">"org.apache.eagle.notification.plugin.AlertKafkaPlugin"</span><span 
class="p">,</span><span class="w">
+   </span><span class="nt">"description"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"send alert to kafka bus"</span><span 
class="p">,</span><span class="w">
+   </span><span class="nt">"enabled"</span><span class="p">:</span><span 
class="kc">true</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"fields"</span><span class="p">:</span><span 
class="w"> </span><span 
class="s2">"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"</span><span
 class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre>
+</div>
 
-<p><strong>Note</strong>: <code>fields</code> is the configuration for 
notification type <code>kafka</code></p>
+<p><strong>Note</strong>: <code class="highlighter-rouge">fields</code> is the 
configuration for notification type <code 
class="highlighter-rouge">kafka</code></p>
 
<p>How can we do that? <a 
href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/bin/eagle-topology-init.sh">Here</a>
 are Eagle’s other notification plugin configurations. Just append yours to the 
script, and run it while the Eagle service is up.</p>
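A customized plugin following the interface shown above might look like the following self-contained sketch. AlertAPIEntity and NotificationStatus are minimal stand-ins declared locally so the sketch compiles on its own; the real types live in the Eagle code base, and this class is illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical plugin sketch: logs each alert and records a delivery status.
public class LoggingNotificationPlugin {
    // Minimal stand-ins for the real Eagle types.
    static class AlertAPIEntity {
        final String message;
        AlertAPIEntity(String m) { message = m; }
    }
    static class NotificationStatus {
        final boolean success;
        final String detail;
        NotificationStatus(boolean s, String d) { success = s; detail = d; }
    }

    private final List<NotificationStatus> statuses = new ArrayList<>();

    // Called once for initialization.
    public void init() { statuses.clear(); }

    // Deliver one alert; here we just log it and record a status.
    public void onAlert(AlertAPIEntity alert) {
        System.out.println("ALERT: " + alert.message);
        statuses.add(new NotificationStatus(true, "logged: " + alert.message));
    }

    public List<NotificationStatus> getStatusList() { return statuses; }

    public static void main(String[] args) {
        LoggingNotificationPlugin p = new LoggingNotificationPlugin();
        p.init();
        p.onAlert(new AlertAPIEntity("suspicious delete on /tmp/private"));
        System.out.println(p.getStatusList().size());
    }
}
```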
 

Modified: eagle/site/docs/tutorial/policy.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/policy.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/policy.html (original)
+++ eagle/site/docs/tutorial/policy.html Thu Jan 12 07:44:47 2017
@@ -242,12 +242,13 @@
   <li>
    <p><strong>Step 2</strong>: Eagle supports a variety of properties for 
match criteria, where users can set different values. Eagle also supports window 
functions to extend policies with time functions.</p>
 
-    <pre><code>command = delete 
+    <div class="highlighter-rouge"><pre class="highlight"><code>command = 
delete 
 (Eagle currently supports the following commands open, delete, copy, append, 
copy from local, get, move, mkdir, create, list, change permissions)
        
 source = /tmp/private 
 (Eagle supports wildcarding for property values for example /tmp/*)
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hdfs-policy2.png" alt="HDFS Policies" /></p>
   </li>
@@ -274,12 +275,13 @@ source = /tmp/private
   <li>
    <p><strong>Step 2</strong>: Eagle supports a variety of properties for 
match criteria, where users can set different values. Eagle also supports window 
functions to extend policies with time functions.</p>
 
-    <pre><code>command = Select 
+    <div class="highlighter-rouge"><pre class="highlight"><code>command = 
Select 
 (Eagle currently supports the following commands DDL statements Create, Drop, 
Alter, Truncate, Show)
        
 sensitivity type = PHONE_NUMBER
 (Eagle supports classifying data in Hive with different sensitivity types. 
Users can use these sensitivity types to create policies)
 </code></pre>
+    </div>
 
     <p><img src="/images/docs/hive-policy2.png" alt="Hive Policies" /></p>
   </li>
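The wildcard support mentioned above (e.g. source = /tmp/*) can be illustrated with a small matcher: "*" matches any run of characters and everything else is literal. This is a sketch of the idea, not Eagle's actual matching code:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of glob-style wildcard matching for policy values.
public class PolicyWildcard {
    public static boolean matches(String pattern, String value) {
        // Quote literal segments, splice ".*" in place of each "*".
        String[] parts = pattern.split("\\*", -1);
        StringBuilder regex = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) regex.append(".*");
            regex.append(Pattern.quote(parts[i]));
        }
        return value.matches(regex.toString());
    }

    public static void main(String[] args) {
        System.out.println(matches("/tmp/*", "/tmp/private"));
        System.out.println(matches("/tmp/*", "/var/log"));
    }
}
```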

Modified: eagle/site/docs/tutorial/site-0.3.0.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/site-0.3.0.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/site-0.3.0.html (original)
+++ eagle/site/docs/tutorial/site-0.3.0.html Thu Jan 12 07:44:47 2017
@@ -239,32 +239,35 @@ Here we give configuration examples for
 
        <p>You may configure the default path for Hadoop clients to connect 
to the remote HDFS namenode.</p>
 
-        <pre><code>  {"fs.defaultFS":"hdfs://sandbox.hortonworks.com:8020"}
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span 
class="nt">"fs.defaultFS"</span><span class="p">:</span><span 
class="s2">"hdfs://sandbox.hortonworks.com:8020"</span><span 
class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
       <li>
         <p>HA case</p>
 
        <p>Basically, you point your fs.defaultFS at your nameservice and let 
the client know how it is configured (the backing namenodes) and how to fail over 
between them in HA mode.</p>
 
-        <pre><code>  {"fs.defaultFS":"hdfs://nameservice1",
-   "dfs.nameservices": "nameservice1",
-   "dfs.ha.namenodes.nameservice1":"namenode1,namenode2",
-   "dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020",
-   "dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020",
-   "dfs.client.failover.proxy.provider.nameservice1": 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span 
class="nt">"fs.defaultFS"</span><span class="p">:</span><span 
class="s2">"hdfs://nameservice1"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.nameservices"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"nameservice1"</span><span class="p">,</span><span class="w">
+   </span><span class="nt">"dfs.ha.namenodes.nameservice1"</span><span 
class="p">:</span><span class="s2">"namenode1,namenode2"</span><span 
class="p">,</span><span class="w">
+   </span><span 
class="nt">"dfs.namenode.rpc-address.nameservice1.namenode1"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"hadoopnamenode01:8020"</span><span class="p">,</span><span 
class="w">
+   </span><span 
class="nt">"dfs.namenode.rpc-address.nameservice1.namenode2"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"hadoopnamenode02:8020"</span><span class="p">,</span><span 
class="w">
+   </span><span 
class="nt">"dfs.client.failover.proxy.provider.nameservice1"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"</span><span
 class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
        <p>For a Kerberos-secured cluster, you need to get a keytab file and the 
principal from your admin, and configure “eagle.keytab.file” and 
“eagle.kerberos.principal” to authenticate Eagle’s access.</p>
 
-        <pre><code>  { "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
-    "eagle.kerberos.principal":"[email protected]"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w"> </span><span 
class="nt">"eagle.keytab.file"</span><span class="p">:</span><span 
class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span 
class="p">,</span><span class="w">
+    </span><span class="nt">"eagle.kerberos.principal"</span><span 
class="p">:</span><span class="s2">"[email protected]"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
 
        <p>If you see an “invalid server principal name” exception, 
you may need to check the DNS resolver, or the data-transfer settings, such as 
“dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, 
“dfs.trustedchannel.resolver.class”, and 
“dfs.datatransfer.client.encrypt”.</p>
       </li>
@@ -275,14 +278,15 @@ Here we give configuration examples for
       <li>
         <p>Basic</p>
 
-        <pre><code>  {
-    "accessType": "metastoredb_jdbc",
-    "password": "hive",
-    "user": "hive",
-    "jdbcDriverClassName": "com.mysql.jdbc.Driver",
-    "jdbcUrl": 
"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w">
+    </span><span class="nt">"accessType"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"metastoredb_jdbc"</span><span 
class="p">,</span><span class="w">
+    </span><span class="nt">"password"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span 
class="w">
+    </span><span class="nt">"user"</span><span class="p">:</span><span 
class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span 
class="w">
+    </span><span class="nt">"jdbcDriverClassName"</span><span 
class="p">:</span><span class="w"> </span><span 
class="s2">"com.mysql.jdbc.Driver"</span><span class="p">,</span><span 
class="w">
+    </span><span class="nt">"jdbcUrl"</span><span class="p">:</span><span 
class="w"> </span><span 
class="s2">"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"</span><span
 class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
     </ul>
   </li>
@@ -295,27 +299,29 @@ Here we give configuration examples for
 
        <p>You need to set the “hbase.zookeeper.quorum” property (for example, 
“localhost”) and the “hbase.zookeeper.property.clientPort” property.</p>
 
-        <pre><code>  {
-      "hbase.zookeeper.property.clientPort":"2181",
-      "hbase.zookeeper.quorum":"localhost"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w">
+      </span><span 
class="nt">"hbase.zookeeper.property.clientPort"</span><span 
class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span 
class="w">
+      </span><span class="nt">"hbase.zookeeper.quorum"</span><span 
class="p">:</span><span class="s2">"localhost"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
       <li>
         <p>Kerberos-secured cluster</p>
 
        <p>Depending on your environment, you can add or remove some of the 
following properties. Here is a reference.</p>
 
-        <pre><code>  {
-      "hbase.zookeeper.property.clientPort":"2181",
-      "hbase.zookeeper.quorum":"localhost",
-      "hbase.security.authentication":"kerberos",
-      "hbase.master.kerberos.principal":"hadoop/[email protected]",
-      "zookeeper.znode.parent":"/hbase",
-      "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab",
-      "eagle.kerberos.principal":"[email protected]"
-  }
-</code></pre>
+        <div class="highlighter-rouge"><pre class="highlight"><code><span 
class="w">  </span><span class="p">{</span><span class="w">
+      </span><span 
class="nt">"hbase.zookeeper.property.clientPort"</span><span 
class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span 
class="w">
+      </span><span class="nt">"hbase.zookeeper.quorum"</span><span 
class="p">:</span><span class="s2">"localhost"</span><span 
class="p">,</span><span class="w">
+      </span><span class="nt">"hbase.security.authentication"</span><span 
class="p">:</span><span class="s2">"kerberos"</span><span 
class="p">,</span><span class="w">
+      </span><span class="nt">"hbase.master.kerberos.principal"</span><span 
class="p">:</span><span class="s2">"hadoop/[email protected]"</span><span 
class="p">,</span><span class="w">
+      </span><span class="nt">"zookeeper.znode.parent"</span><span 
class="p">:</span><span class="s2">"/hbase"</span><span class="p">,</span><span 
class="w">
+      </span><span class="nt">"eagle.keytab.file"</span><span 
class="p">:</span><span 
class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span 
class="p">,</span><span class="w">
+      </span><span class="nt">"eagle.kerberos.principal"</span><span 
class="p">:</span><span class="s2">"[email protected]"</span><span class="w">
+  </span><span class="p">}</span><span class="w">
+</span></code></pre>
+        </div>
       </li>
     </ul>
   </li>
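One way to sanity-check the HA settings shown above is to verify that every namenode listed in dfs.ha.namenodes.&lt;nameservice&gt; has a matching dfs.namenode.rpc-address entry. The key names below follow the Hadoop properties in the tutorial; the checker itself is a hypothetical sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical completeness check for HDFS HA client configuration.
public class HaConfigCheck {
    public static boolean isComplete(Map<String, String> conf, String nameservice) {
        String namenodes = conf.get("dfs.ha.namenodes." + nameservice);
        if (namenodes == null) return false;
        for (String nn : namenodes.split(",")) {
            // Each logical namenode needs a concrete rpc-address.
            if (!conf.containsKey("dfs.namenode.rpc-address." + nameservice + "." + nn.trim())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.ha.namenodes.nameservice1", "namenode1,namenode2");
        conf.put("dfs.namenode.rpc-address.nameservice1.namenode1", "hadoopnamenode01:8020");
        conf.put("dfs.namenode.rpc-address.nameservice1.namenode2", "hadoopnamenode02:8020");
        System.out.println(isComplete(conf, "nameservice1"));
    }
}
```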

Modified: eagle/site/docs/tutorial/topologymanagement.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/topologymanagement.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/topologymanagement.html (original)
+++ eagle/site/docs/tutorial/topologymanagement.html Thu Jan 12 07:44:47 2017
@@ -227,7 +227,7 @@
 <p>Application manager consists of a daemon scheduler and an execution module. 
The scheduler periodically loads user operations (start/stop) from the database, and 
the execution module executes these operations. For more details, please refer 
to <a 
href="https://cwiki.apache.org/confluence/display/EAG/Application+Management">here</a>.</p>
 
 <h3 id="configurations">Configurations</h3>
-<p>The configuration file <code>eagle-scheduler.conf</code> defines scheduler 
parameters, execution platform settings and parts of default topology 
configuration.</p>
+<p>The configuration file <code 
class="highlighter-rouge">eagle-scheduler.conf</code> defines scheduler 
parameters, execution platform settings and parts of default topology 
configuration.</p>
 
 <ul>
   <li>
@@ -321,7 +321,7 @@
   <li>
     <p>Editing eagle-scheduler.conf, and start Eagle service</p>
 
-    <pre><code> # enable application manager       
+    <div class="highlighter-rouge"><pre class="highlight"><code> # enable 
application manager       
  appCommandLoaderEnabled = true
     
  # provide jar path
@@ -331,9 +331,10 @@
  envContextConfig.url = "http://sandbox.hortonworks.com:8744";
  envContextConfig.nimbusHost = "sandbox.hortonworks.com"
 </code></pre>
+    </div>
 
     <p>For more configurations, please back to <a 
href="/docs/configuration.html">Application Configuration</a>. <br />
- After the configuration is ready, start Eagle service 
<code>bin/eagle-service.sh start</code>.</p>
+ After the configuration is ready, start Eagle service <code 
class="highlighter-rouge">bin/eagle-service.sh start</code>.</p>
   </li>
   <li>
     <p>Go to admin page 
@@ -355,11 +356,11 @@
   <li>
     <p>Go to site page, and add topology configurations.</p>
 
-    <p><strong>NOTICE</strong> topology configurations defined here are 
REQUIRED an extra prefix <code>.app</code></p>
+    <p><strong>NOTICE</strong>: topology configurations defined here require an 
extra prefix <code class="highlighter-rouge">.app</code></p>
 
    <p>Below are some example configurations for [site=sandbox, 
application=hbaseSecurityLog].</p>
 
-    <pre><code> classification.hbase.zookeeper.property.clientPort=2181
+    <div class="highlighter-rouge"><pre class="highlight"><code> 
classification.hbase.zookeeper.property.clientPort=2181
  classification.hbase.zookeeper.quorum=sandbox.hortonworks.com
     
  app.envContextConfig.env=storm
@@ -388,6 +389,7 @@
  app.eagleProps.eagleService.username=admin
  app.eagleProps.eagleService.password=secret
 </code></pre>
+    </div>
 
     <p><img src="/images/appManager/topology-configuration-1.png" 
alt="topology-configuration-1" />
 <img src="/images/appManager/topology-configuration-2.png" 
alt="topology-configuration-2" /></p>
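The prefix convention in the NOTICE above (topology-level keys carry an extra app prefix, while keys such as classification.* stay site-level) can be illustrated by a small sketch. The splitting behavior here is an assumption for illustration, not the scheduler's actual code; note the example keys use a leading "app.":

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: extract topology-level settings by stripping the
// "app." prefix; non-prefixed keys are left for site-level use.
public class AppPrefix {
    public static Map<String, String> topologyConfig(Map<String, String> siteConfig) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : siteConfig.entrySet()) {
            if (e.getKey().startsWith("app.")) {
                // "app.envContextConfig.env" -> "envContextConfig.env"
                out.put(e.getKey().substring("app.".length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> site = new LinkedHashMap<>();
        site.put("app.envContextConfig.env", "storm");
        site.put("classification.hbase.zookeeper.quorum", "sandbox.hortonworks.com");
        System.out.println(topologyConfig(site));
    }
}
```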

Modified: eagle/site/docs/tutorial/userprofile.html
URL: 
http://svn.apache.org/viewvc/eagle/site/docs/tutorial/userprofile.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/docs/tutorial/userprofile.html (original)
+++ eagle/site/docs/tutorial/userprofile.html Thu Jan 12 07:44:47 2017
@@ -232,9 +232,10 @@ is started.</p>
       <li>
         <p>Option 1: command line</p>
 
-        <pre><code>$ cd &lt;eagle-home&gt;/bin
+        <div class="highlighter-rouge"><pre class="highlight"><code>$ cd 
&lt;eagle-home&gt;/bin
 $ bin/eagle-userprofile-scheduler.sh --site sandbox start
 </code></pre>
+        </div>
       </li>
       <li>
         <p>Option 2: start via Apache Ambari
@@ -262,8 +263,9 @@ $ bin/eagle-userprofile-scheduler.sh --s
 
    <p>Submit the userProfiles topology if it is not shown on the <a 
href="http://sandbox.hortonworks.com:8744">topology UI</a></p>
 
-    <pre><code>$ bin/eagle-topology.sh --main 
org.apache.eagle.security.userprofile.UserProfileDetectionMain --config 
conf/sandbox-userprofile-topology.conf start
+    <div class="highlighter-rouge"><pre class="highlight"><code>$ 
bin/eagle-topology.sh --main 
org.apache.eagle.security.userprofile.UserProfileDetectionMain --config 
conf/sandbox-userprofile-topology.conf start
 </code></pre>
+    </div>
   </li>
   <li>
     <p><strong>Option 2</strong>: Apache Ambari</p>
@@ -278,23 +280,26 @@ $ bin/eagle-userprofile-scheduler.sh --s
  <li>Prepare sample data for ML training and validation
     <ul>
      <li>a. Download the following sample data to be used for training</li>
-      <li><a 
href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code>user1.hdfs-audit.2015-10-11-00.txt</code></a></li>
-      <li><a 
href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code>user1.hdfs-audit.2015-10-11-01.txt</code></a></li>
-      <li>b. Downlaod <a 
href="/data/userprofile-validate.txt"><code>userprofile-validate.txt</code></a>file
 which contains data points that you can try to test the models</li>
+    </ul>
+    <ul>
+      <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code 
class="highlighter-rouge">user1.hdfs-audit.2015-10-11-00.txt</code></a></li>
+      <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code 
class="highlighter-rouge">user1.hdfs-audit.2015-10-11-01.txt</code></a>
+    * b. Download <a href="/data/userprofile-validate.txt"><code 
class="highlighter-rouge">userprofile-validate.txt</code></a> file, which 
contains data points that you can use to test the models</li>
     </ul>
   </li>
   <li>Copy the files (downloaded in the previous step) into a location in 
sandbox 
-For example: <code>/usr/hdp/current/eagle/lib/userprofile/data/</code></li>
-  <li>Modify <code>&lt;Eagle-home&gt;/conf/sandbox-userprofile-scheduler.conf 
</code>
-update <code>training-audit-path</code> to set to the path for training data 
sample (the path you used for Step 1.a)
+For example: <code 
class="highlighter-rouge">/usr/hdp/current/eagle/lib/userprofile/data/</code></li>
+  <li>Modify <code 
class="highlighter-rouge">&lt;Eagle-home&gt;/conf/sandbox-userprofile-scheduler.conf
 </code>
+update <code class="highlighter-rouge">training-audit-path</code> to point to 
the path of the training data sample (the path you used for Step 1.a), and
 update detection-audit-path to point to the path used for validation (the path you 
used for Step 1.b)</li>
   <li>Run ML training program from eagle UI</li>
   <li>
     <p>Produce Apache Kafka data using the contents from validate file (Step 
1.b)
-Run the command (assuming the eagle configuration uses Kafka topic 
<code>sandbox_hdfs_audit_log</code>)</p>
+Run the command (assuming the eagle configuration uses Kafka topic <code 
class="highlighter-rouge">sandbox_hdfs_audit_log</code>)</p>
 
-    <pre><code> ./kafka-console-producer.sh --broker-list 
sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log
+    <div class="highlighter-rouge"><pre class="highlight"><code> 
./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic 
sandbox_hdfs_audit_log
 </code></pre>
+    </div>
   </li>
  <li>Paste a few lines of data from the validate file into kafka-console-producer, 
then check <a 
href="http://localhost:9099/eagle-service/#/dam/alertList">http://localhost:9099/eagle-service/#/dam/alertList</a>
 for generated alerts</li>
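The training step above builds per-user profiles from HDFS audit events. As a toy illustration of the kind of feature extraction involved (not Eagle's actual ML code), one can count audit commands per user; the command names mirror the HDFS audit log "cmd=" field, everything else is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch: turn one user's audit commands into a frequency vector
// that a profiling model could learn from.
public class UserProfileFeatures {
    public static Map<String, Integer> commandCounts(String[] auditCommands) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String cmd : auditCommands) {
            counts.merge(cmd, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] cmds = {"open", "open", "delete", "listStatus"};
        System.out.println(commandCounts(cmds));
    }
}
```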

Modified: eagle/site/feed.xml
URL: 
http://svn.apache.org/viewvc/eagle/site/feed.xml?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/feed.xml (original)
+++ eagle/site/feed.xml Thu Jan 12 07:44:47 2017
@@ -5,9 +5,9 @@
     <description>Eagle - Analyze Big Data Platforms for Security and 
Performance</description>
     <link>http://goeagle.io/</link>
    <atom:link href="http://goeagle.io/feed.xml" rel="self" 
type="application/rss+xml"/>
-    <pubDate>Tue, 03 Jan 2017 09:20:56 +0800</pubDate>
-    <lastBuildDate>Tue, 03 Jan 2017 09:20:56 +0800</lastBuildDate>
-    <generator>Jekyll v2.5.3</generator>
+    <pubDate>Thu, 12 Jan 2017 15:28:13 +0800</pubDate>
+    <lastBuildDate>Thu, 12 Jan 2017 15:28:13 +0800</lastBuildDate>
+    <generator>Jekyll v3.3.1</generator>
     
       <item>
        <title>Apache Eagle officially released: a distributed real-time Hadoop data security solution</title>
@@ -17,7 +17,7 @@
 
 &lt;p&gt;Recently, eBay officially announced the release of a distributed real-time security monitoring solution, Apache Eagle (http://goeagle.io), to the open-source community. The project formally joined the Apache Incubator on October 26, 2015. Apache Eagle provides an efficient, distributed streaming policy engine that is highly real-time, scalable, easily extensible, and interactive, and it integrates machine learning to build profiles of user behavior so as to protect big data in the Hadoop ecosystem intelligently and in real time.&lt;/p&gt;
 
-&lt;h2 id=&quot;section&quot;&gt;Background&lt;/h2&gt;
+&lt;h2 id=&quot;背景&quot;&gt;Background&lt;/h2&gt;
 &lt;p&gt;With the rise of big data, more and more successful companies and organizations have adopted a data-driven style of doing business. At eBay we have tens of thousands of engineers, analysts, and data scientists who access and analyze petabytes of data every day to bring an unrivaled experience to our users, and across our global business we also make extensive use of massive data sets to connect our hundreds of millions of users.&lt;/p&gt;
 
 &lt;p&gt;In recent years Hadoop has gradually become the most popular solution in the big data analytics space, and eBay has long used Hadoop technology to mine value from data. For example, we use big data to improve the user search experience, to identify and optimize precise ad placement, to enrich our product catalog, and to analyze click streams to understand how users use our online marketplace.&lt;/p&gt;
@@ -54,20 +54,20 @@
   &lt;li&gt;&lt;strong&gt;Open source&lt;/strong&gt;: Eagle has always been developed according to open-source standards and is built on top of many open-source products in the big data space, so we decided to open-source Eagle under the Apache license to give back to the community, and we look forward to the community's feedback, collaboration, and support.&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;eagle&quot;&gt;Eagle Overview&lt;/h2&gt;
+&lt;h2 id=&quot;eagle概览&quot;&gt;Eagle Overview&lt;/h2&gt;
 
 &lt;p&gt;&lt;img src=&quot;/images/posts/eagle-group.png&quot; 
alt=&quot;&quot; /&gt;&lt;/p&gt;
 
-&lt;h4 id=&quot;data-collection-and-storage&quot;&gt;Data Collection and Storage&lt;/h4&gt;
+&lt;h4 id=&quot;数据流接入和存储data-collection-and-storage&quot;&gt;Data Collection and Storage&lt;/h4&gt;
 
 &lt;p&gt;Eagle provides highly scalable programming APIs and can integrate any type of data source into Eagle's policy execution engine. For example, in the Eagle HDFS audit-event monitoring module, data collected by the Namenode Log4j appender or a Logstash agent is received in real time via Kafka; in the Eagle Hive monitoring module, the Hive query logs of running jobs are collected through the YARN API, with fairly high scalability and fault tolerance.&lt;/p&gt;
 
-&lt;h4 id=&quot;data-processing&quot;&gt;Data Processing&lt;/h4&gt;
+&lt;h4 id=&quot;数据实时处理data-processing&quot;&gt;Data Processing&lt;/h4&gt;
 
 &lt;p&gt;&lt;strong&gt;Stream Processing API&lt;/strong&gt;: Eagle provides a highly abstract stream-processing API that is independent of the physical platform. Apache Storm is supported by default, but the API can be extended to any other stream-processing engine, such as Flink or Samza. This layer of abstraction lets developers define monitoring data-processing logic without binding to any particular streaming platform at the physical execution layer; instead, they reuse, connect, and assemble components such as data transformation, filtering, and external data joins to build a DAG (directed acyclic graph) that meets their needs. Developers can also easily integrate their business logic with the Eagle policy engine framework programmatically. Internally, the Eagle framework compiles the DAG describing the business logic into a native application for the underlying streaming architecture, such as an Apache Storm topology, thereby achieving platform independence.&lt;/p&gt;
 
 &lt;p&gt;&lt;strong&gt;The following is an example of how Eagle processes events and alerts:&lt;/strong&gt;&lt;/p&gt;
 
-&lt;pre&gt;&lt;code&gt;StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
+&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
 StreamProducer producer = env.newSource(new 
KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare 
kafka source
        .flatMap(new AuditLogTransformer()) // transform event
        .groupBy(Arrays.asList(0))  // group by 1st field
@@ -75,6 +75,7 @@ StreamProducer producer = env.newSource(
        .alertWithConsumer("userActivity", "userProfileExecutor") // ML 
policy evaluation
 env.execute(); // execute stream processing and alert
 &lt;/code&gt;&lt;/pre&gt;
+&lt;/div&gt;
 
&lt;p&gt;&lt;strong&gt;Alerting Framework&lt;/strong&gt; The Eagle alerting framework consists of the stream metadata API, the policy engine service provider API, the policy partitioner API, and an alert deduplication framework:&lt;/p&gt;
 
@@ -84,7 +85,7 @@ env.execute(); // execute stream process
   &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Extensibility&lt;/strong&gt; Eagle&#39;s policy engine service provider API allows you to plug in new policy engines.&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  public interface PolicyEvaluatorServiceProvider {
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  public interface 
PolicyEvaluatorServiceProvider {
     public String getPolicyType();         // literal string to identify one 
type of policy
     public Class&amp;lt;? extends PolicyEvaluator&amp;gt; 
getPolicyEvaluator(); // get policy evaluator implementation
     public List&amp;lt;Module&amp;gt; getBindingModules();  // policy text 
with json format to object mapping
@@ -95,15 +96,17 @@ env.execute(); // execute stream process
     public void onPolicyDelete(); // invoked when policy is deleted
   }
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
   &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Policy Partitioner API&lt;/strong&gt; allows policies to run in parallel on different physical nodes, and lets you define a custom policy partitioner class. Together these capabilities allow policies and events to be executed in a fully distributed way.&lt;/li&gt;
   &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; Eagle supports a policy partitioning interface so that large numbers of policies can run concurrently at scale.&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  public interface PolicyPartitioner extends 
Serializable {
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  public interface PolicyPartitioner 
extends Serializable {
     int partition(int numTotalPartitions, String policyType, String policyId); 
// method to distribute policies
   }
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
 
     &lt;p&gt;&lt;img src=&quot;/images/posts/policy-partition.png&quot; 
alt=&quot;&quot; /&gt;&lt;/p&gt;
 
@@ -160,26 +163,29 @@ Eagle 支持根据用æˆ
   &lt;li&gt;
     
&lt;p&gt;Single-event policy (a user accesses a sensitive data column in Hive)&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  from 
hiveAccessLogStream[sensitivityType==&#39;PHONE_NUMBER&#39;] select * insert 
into outputStream;
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  from 
hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into 
outputStream;
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
   &lt;/li&gt;
   &lt;li&gt;
    &lt;p&gt;Window-based policy (a user accesses the directory /tmp/private more than 5 times within 10 minutes)&lt;/p&gt;
 
-    &lt;pre&gt;&lt;code&gt;  hdfsAuditLogEventStream[(src == 
&#39;/tmp/private&#39;)]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &amp;gt;= 5 insert 
into outputStream;
+    &lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;  hdfsAuditLogEventStream[(src == 
'/tmp/private')]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &amp;gt;= 5 insert 
into outputStream;
 &lt;/code&gt;&lt;/pre&gt;
+    &lt;/div&gt;
   &lt;/li&gt;
 &lt;/ul&gt;
 
&lt;p&gt;&lt;strong&gt;Query Service&lt;/strong&gt; Eagle provides a SQL-like REST API for comprehensive computation, querying, and analysis over massive data sets, supporting filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, pagination, and more. Eagle uses HBase as its default data store, but JDBC-based relational databases are also supported. When HBase is chosen, Eagle natively gains HBase&#39;s ability to store and query massive volumes of monitoring data: the Eagle query framework compiles the user&#39;s SQL-like query into native HBase Filter objects, and can further improve response time through HBase coprocessors.&lt;/p&gt;
 
-&lt;pre&gt;&lt;code&gt;query=AlertDefinitionService[@dataSource=&quot;hiveQueryLog&quot;]{@policyDef}&amp;amp;pageSize=100000
+&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;pre 
class=&quot;highlight&quot;&gt;&lt;code&gt;query=AlertDefinitionService[@dataSource=&quot;hiveQueryLog&quot;]{@policyDef}&amp;amp;pageSize=100000
 &lt;/code&gt;&lt;/pre&gt;
+&lt;/div&gt;
 
-&lt;h2 id=&quot;eagleebay&quot;&gt;Eagle Use Cases at eBay&lt;/h2&gt;
+&lt;h2 id=&quot;eagle在ebay的使用场景&quot;&gt;Eagle Use Cases at eBay&lt;/h2&gt;
 
&lt;p&gt;Today, Eagle&#39;s data activity monitoring system is deployed on a Hadoop cluster of more than 2,500 nodes to protect hundreds of petabytes of data, and we plan to extend it to another ten or so Hadoop clusters by the end of this year, covering the 10,000+ nodes of all of eBay&#39;s major Hadoop deployments. In our production environment we have configured basic security policies for data in HDFS, Hive, and other clusters, and we will keep introducing more policies before year end to ensure that critical data stays safe. Eagle&#39;s policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and policies based on user profiles generated by machine learning models. We also have a broad set of policies to prevent data loss, data being copied to insecure locations, and sensitive data being accessed from unauthorized zones. The great flexibility and extensibility of Eagle&#39;s policy definitions will let us easily add more, and more complex, policies to support more diverse use cases in the future.&lt;/p&gt;
 
-&lt;h2 id=&quot;section-1&quot;&gt;Future Plans&lt;/h2&gt;
+&lt;h2 id=&quot;后续计划&quot;&gt;Future Plans&lt;/h2&gt;
&lt;p&gt;Over the past two years at eBay, beyond data activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, which save our platform team a great deal of manual labor and effectively improve overall cluster resource utilization.&lt;/p&gt;
 
&lt;p&gt;The following are some features we are currently developing:&lt;/p&gt;
@@ -196,7 +202,7 @@ Eagle 支持根据用æˆ
   &lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;section-2&quot;&gt;About the Author&lt;/h2&gt;
+&lt;h2 id=&quot;关于作者&quot;&gt;About the Author&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/haoch&quot;&gt;陈浩 (Hao Chen)&lt;/a&gt;, an Apache Eagle committer and PMC member, is a senior software engineer in eBay&#39;s Analytics Data Infrastructure department, responsible for Eagle&#39;s product design, technical architecture, core implementation, and open source community advocacy.&lt;/p&gt;
 
&lt;p&gt;Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:&lt;/p&gt;
@@ -210,7 +216,7 @@ Eagle 支持根据用æˆ
 
&lt;p&gt;eBay&#39;s Analytics Data Infrastructure department is eBay&#39;s global data and analytics infrastructure organization. It develops and manages eBay&#39;s data platforms across databases, data warehouses, Hadoop, business intelligence, and machine learning, and supports teams across eBay in making timely, effective operational decisions with advanced data analytics solutions, serving business users around the globe.&lt;/p&gt;
 
-&lt;h2 id=&quot;section-3&quot;&gt;References&lt;/h2&gt;
+&lt;h2 id=&quot;参考资料&quot;&gt;References&lt;/h2&gt;
 
 &lt;ul&gt;
  &lt;li&gt;Apache Eagle documentation: &lt;a href=&quot;http://goeagle.io&quot;&gt;http://goeagle.io&lt;/a&gt;&lt;/li&gt;
@@ -218,7 +224,7 @@ Eagle 支持根据用æˆ
  &lt;li&gt;Apache Eagle project: &lt;a href=&quot;http://incubator.apache.org/projects/eagle.html&quot;&gt;http://incubator.apache.org/projects/eagle.html&lt;/a&gt;&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;h2 id=&quot;section-4&quot;&gt;Media Coverage&lt;/h2&gt;
+&lt;h2 id=&quot;引用链接&quot;&gt;Media Coverage&lt;/h2&gt;
 &lt;ul&gt;
   &lt;li&gt;&lt;strong&gt;CSDN&lt;/strong&gt;: &lt;a 
href=&quot;http://www.csdn.net/article/2015-10-29/2826076&quot;&gt;http://www.csdn.net/article/2015-10-29/2826076&lt;/a&gt;&lt;/li&gt;
   &lt;li&gt;&lt;strong&gt;OSCHINA&lt;/strong&gt;: &lt;a 
href=&quot;http://www.oschina.net/news/67515/apache-eagle&quot;&gt;http://www.oschina.net/news/67515/apache-eagle&lt;/a&gt;&lt;/li&gt;

Modified: eagle/site/index.html
URL: 
http://svn.apache.org/viewvc/eagle/site/index.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/index.html (original)
+++ eagle/site/index.html Thu Jan 12 07:44:47 2017
@@ -108,7 +108,10 @@
         <br/>
        <p style="width:80%; margin-left:auto; margin-right:auto;">Big data platforms normally generate huge amounts of operational logs and metrics in real time. Eagle was founded to solve hard problems in securing big data platforms and tuning their performance, by ensuring that metrics and logs remain available and that alerts fire immediately even under huge traffic.</p>
         <div class="sepline"></div>
-        <P>Eagle has been accepted as an Apache Incubator Project on Oct 26, 
2015.</P>
+        <p>Eagle was announced as a Top-Level Project (TLP) of the Apache Software Foundation (ASF) on Jan 10, 2017.</p>
+        <!--
+        <p>Eagle has been accepted as an Apache Incubator Project on Oct 26, 2015.</p>
+         -->
         <div class="sepline"></div>
         <p>Eagle analyzes big data platforms and reports issues in 3 steps:</p>
       </div>

Modified: eagle/site/post/2015/10/27/apache-eagle-announce-cn.html
URL: 
http://svn.apache.org/viewvc/eagle/site/post/2015/10/27/apache-eagle-announce-cn.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/post/2015/10/27/apache-eagle-announce-cn.html (original)
+++ eagle/site/post/2015/10/27/apache-eagle-announce-cn.html Thu Jan 12 
07:44:47 2017
@@ -93,7 +93,7 @@
 
<p>Recently, eBay announced the release to the open source community of its distributed real-time security monitoring solution, Apache Eagle (http://goeagle.io), which formally joined the Apache Incubator on October 26, 2015. Apache Eagle provides an efficient, distributed streaming policy engine that is highly real-time, scalable, extensible, and interactive, and it integrates machine learning to build profiles of user behavior so as to intelligently protect big data in the Hadoop ecosystem in real time.</p>
 
-<h2 id="section">Background</h2>
+<h2 id="背景">Background</h2>
<p>With the growth of big data, more and more successful companies and organizations are adopting data-driven business models. At eBay we have tens of thousands of engineers, analysts, and data scientists who access and analyze petabytes of data every day to bring our users an unparalleled experience. Across our global business we also make extensive use of massive data sets to connect our hundreds of millions of users.</p>
 
 
<p>In recent years Hadoop has gradually become the most popular solution in big data analytics, and eBay has long used Hadoop to mine value from data: for example, we use big data to improve users' search experience, identify and optimize targeted advertising, enrich our product catalog, and analyze clickstreams to understand how users use our online marketplace.</p>
@@ -130,20 +130,20 @@
  <li><strong>Open source</strong>: Eagle has always been developed according to open source standards and is built on top of many open source products in the big data space, so we decided to open-source Eagle under the Apache License to give back to the community; we also look forward to the community's feedback, collaboration, and support.</li>
 </ul>
 
-<h2 id="eagle">Eagle Overview</h2>
+<h2 id="eagle概览">Eagle Overview</h2>
 
 <p><img src="/images/posts/eagle-group.png" alt="" /></p>
 
-<h4 id="data-collection-and-storage">Data Collection and Storage</h4>
+<h4 id="数据流接入和存储data-collection-and-storage">Data Collection and Storage</h4>
 
<p>Eagle provides highly extensible programming APIs that allow any type of data source to be integrated into Eagle's policy execution engine. For example, the Eagle HDFS audit event monitoring module uses Kafka to receive, in real time, data collected by the Namenode Log4j Appender or a Logstash agent; the Eagle Hive monitoring module collects the Hive query logs of running jobs through the YARN API, with high scalability and fault tolerance.</p>
 
-<h4 id="data-processing">Real-Time Data Processing</h4>
+<h4 id="数据实时处理data-processing">Real-Time Data Processing</h4>
 
<p><strong>Stream Processing API</strong> Eagle provides a highly abstract stream processing API that is independent of the physical platform. Apache Storm is supported by default, but the API can be extended to any other stream processing engine, such as Flink or Samza. This abstraction lets developers define monitoring data processing logic without binding to any specific stream processing platform at the physical execution layer; instead, they reuse, connect, and assemble components such as data transformation, filtering, and external data joins into the DAG (directed acyclic graph) they need. Developers can also easily integrate business logic flows with the Eagle policy engine framework programmatically. Internally, the Eagle framework compiles the DAG describing the business logic into a native application of the underlying stream processing framework, such as an Apache Storm topology, thereby achieving platform independence.</p>
 
 
<p><strong>The following example shows how Eagle processes events and alerts:</strong></p>
 
-<pre><code>StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
+<div class="highlighter-rouge"><pre 
class="highlight"><code>StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env
 StreamProducer producer = env.newSource(new 
KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare 
kafka source
        .flatMap(new AuditLogTransformer()) // transform event
        .groupBy(Arrays.asList(0))  // group by 1st field
@@ -151,6 +151,7 @@ StreamProducer producer = env.newSource(
        .alertWithConsumer("userActivity", "userProfileExecutor") // ML policy evaluation
 env.execute(); // execute stream processing and alert
 </code></pre>
+</div>
 
<p><strong>Alerting Framework</strong> The Eagle alerting framework consists of the stream metadata API, the policy engine service provider API, the policy partitioner API, and an alert deduplication framework:</p>
 
@@ -160,7 +161,7 @@ env.execute(); // execute stream process
   <li>
    <p><strong>Extensibility</strong> Eagle's policy engine service provider API allows you to plug in new policy engines.</p>
 
-    <pre><code>  public interface PolicyEvaluatorServiceProvider {
+    <div class="highlighter-rouge"><pre class="highlight"><code>  public 
interface PolicyEvaluatorServiceProvider {
     public String getPolicyType();         // literal string to identify one 
type of policy
     public Class&lt;? extends PolicyEvaluator&gt; getPolicyEvaluator(); // get 
policy evaluator implementation
     public List&lt;Module&gt; getBindingModules();  // policy text with json 
format to object mapping
@@ -171,15 +172,17 @@ env.execute(); // execute stream process
     public void onPolicyDelete(); // invoked when policy is deleted
   }
 </code></pre>
+    </div>
   </li>
  <li><strong>Policy Partitioner API</strong> allows policies to run in parallel on different physical nodes, and lets you define a custom policy partitioner class. Together these capabilities allow policies and events to be executed in a fully distributed way.</li>
   <li>
    <p><strong>Scalability</strong> Eagle supports a policy partitioning interface so that large numbers of policies can run concurrently at scale.</p>
 
-    <pre><code>  public interface PolicyPartitioner extends Serializable {
+    <div class="highlighter-rouge"><pre class="highlight"><code>  public 
interface PolicyPartitioner extends Serializable {
     int partition(int numTotalPartitions, String policyType, String policyId); 
// method to distribute policies
   }
 </code></pre>
+    </div>
 
     <p><img src="/images/posts/policy-partition.png" alt="" /></p>
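For context, the `PolicyPartitioner` interface shown in the hunk above is small enough to sketch. A minimal hash-based implementation (hypothetical, not taken from the Eagle codebase; class and argument values are illustrative) might look like:

```java
import java.io.Serializable;

// The interface as shown in the diff above.
interface PolicyPartitioner extends Serializable {
    int partition(int numTotalPartitions, String policyType, String policyId);
}

// Hypothetical hash-based partitioner: maps each (policyType, policyId)
// pair deterministically to one of numTotalPartitions slots.
public class HashPolicyPartitioner implements PolicyPartitioner {
    @Override
    public int partition(int numTotalPartitions, String policyType, String policyId) {
        // floorMod keeps the result in [0, numTotalPartitions) even when
        // hashCode() is negative, so every policy lands on a valid partition.
        return Math.floorMod((policyType + ":" + policyId).hashCode(), numTotalPartitions);
    }

    public static void main(String[] args) {
        PolicyPartitioner p = new HashPolicyPartitioner();
        System.out.println(p.partition(4, "siddhiCEPEngine", "hdfsAuditPolicy-1"));
    }
}
```

Because the mapping depends only on the policy's type and id, every node computes the same assignment without coordination, which is what lets policies execute in a fully distributed way.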
 
@@ -236,26 +239,29 @@ Eagle 支持根据用æˆ
   <li>
    <p>Single-event policy (a user accesses a sensitive data column in Hive)</p>
 
-    <pre><code>  from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] 
select * insert into outputStream;
+    <div class="highlighter-rouge"><pre class="highlight"><code>  from 
hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into 
outputStream;
 </code></pre>
+    </div>
   </li>
   <li>
    <p>Window-based policy (a user accesses the directory /tmp/private more than 5 times within 10 minutes)</p>
 
-    <pre><code>  hdfsAuditLogEventStream[(src == 
'/tmp/private')]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into 
outputStream;
+    <div class="highlighter-rouge"><pre class="highlight"><code>  
hdfsAuditLogEventStream[(src == 
'/tmp/private')]#window.externalTime(timestamp,10 min) select user, 
count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into 
outputStream;
 </code></pre>
+    </div>
   </li>
 </ul>
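The window-based policy in the hunk above counts per-user accesses inside a 10-minute external-time window. Outside of a Siddhi engine, the same check can be sketched in plain Java (a sketch only; class and method names are hypothetical, and it alerts at 5 or more hits to match the query's `aggValue >= 5`):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the windowed policy: flag a user who accesses
// /tmp/private at least 5 times within a sliding 10-minute window,
// using the event's own timestamp (external time), not wall-clock time.
public class WindowPolicySketch {
    static final long WINDOW_MS = 10 * 60 * 1000L;
    private final Map<String, Deque<Long>> hits = new HashMap<>();

    /** Returns true when this event brings the user to >= 5 hits in the window. */
    boolean onEvent(String user, String src, long timestampMs) {
        if (!"/tmp/private".equals(src)) return false; // stream filter [(src == '/tmp/private')]
        Deque<Long> q = hits.computeIfAbsent(user, u -> new ArrayDeque<>());
        q.addLast(timestampMs);
        // Evict events that have fallen out of the 10-minute window.
        while (!q.isEmpty() && q.peekFirst() < timestampMs - WINDOW_MS) {
            q.removeFirst();
        }
        return q.size() >= 5; // having aggValue >= 5
    }
}
```

The Siddhi query expresses the same logic declaratively; the sketch just makes the filter, group-by-user, and window-eviction steps explicit.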
 
<p><strong>Query Service</strong> Eagle provides a SQL-like REST API for comprehensive computation, querying, and analysis over massive data sets, supporting filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, pagination, and more. Eagle uses HBase as its default data store, but JDBC-based relational databases are also supported. When HBase is chosen, Eagle natively gains HBase's ability to store and query massive volumes of monitoring data: the Eagle query framework compiles the user's SQL-like query into native HBase Filter objects, and can further improve response time through HBase coprocessors.</p>
 
-<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000
+<div class="highlighter-rouge"><pre 
class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000
 </code></pre>
+</div>
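As an aside, the SQL-like expression in the query string above contains characters (`[`, `@`, `"`, `{`) that must be URL-encoded before the GET request is sent. A small sketch of building that string (the helper is hypothetical, not part of Eagle's documented client API):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EagleQuerySketch {
    // Hypothetical helper: URL-encodes the SQL-like query expression and
    // appends the pageSize parameter, producing the string shown above.
    static String buildQueryString(String expr, int pageSize) {
        try {
            return "query=" + URLEncoder.encode(expr, "UTF-8") + "&pageSize=" + pageSize;
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always supported
        }
    }

    public static void main(String[] args) {
        String expr = "AlertDefinitionService[@dataSource=\"hiveQueryLog\"]{@policyDef}";
        System.out.println(buildQueryString(expr, 100000));
    }
}
```

The encoded form is what actually travels on the wire; the unencoded form in the page is easier to read.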
 
-<h2 id="eagleebay">Eagle Use Cases at eBay</h2>
+<h2 id="eagle在ebay的使用场景">Eagle Use Cases at eBay</h2>
 
<p>Today, Eagle's data activity monitoring system is deployed on a Hadoop cluster of more than 2,500 nodes to protect hundreds of petabytes of data, and we plan to extend it to another ten or so Hadoop clusters by the end of this year, covering the 10,000+ nodes of all of eBay's major Hadoop deployments. In our production environment we have configured basic security policies for data in HDFS, Hive, and other clusters, and we will keep introducing more policies before year end to ensure that critical data stays safe. Eagle's policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and policies based on user profiles generated by machine learning models. We also have a broad set of policies to prevent data loss, data being copied to insecure locations, and sensitive data being accessed from unauthorized zones. The great flexibility and extensibility of Eagle's policy definitions will let us easily add more, and more complex, policies to support more diverse use cases in the future.</p>
 
-<h2 id="section-1">Future Plans</h2>
+<h2 id="后续计划">Future Plans</h2>
<p>Over the past two years at eBay, beyond data activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, which save our platform team a great deal of manual labor and effectively improve overall cluster resource utilization.</p>
 
<p>The following are some features we are currently developing:</p>
@@ -272,7 +278,7 @@ Eagle 支持根据用æˆ
   </li>
 </ul>
 
-<h2 id="section-2">About the Author</h2>
+<h2 id="关于作者">About the Author</h2>
<p><a href="https://github.com/haoch">陈浩 (Hao Chen)</a>, an Apache Eagle committer and PMC member, is a senior software engineer in eBay's Analytics Data Infrastructure department, responsible for Eagle's product design, technical architecture, core implementation, and open source community advocacy.</p>
 
<p>Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:</p>
@@ -286,7 +292,7 @@ Eagle 支持根据用æˆ
 
<p>eBay's Analytics Data Infrastructure department is eBay's global data and analytics infrastructure organization. It develops and manages eBay's data platforms across databases, data warehouses, Hadoop, business intelligence, and machine learning, and supports teams across eBay in making timely, effective operational decisions with advanced data analytics solutions, serving business users around the globe.</p>
 
-<h2 id="section-3">References</h2>
+<h2 id="参考资料">References</h2>
 
 <ul>
  <li>Apache Eagle documentation: <a href="http://goeagle.io">http://goeagle.io</a></li>
@@ -294,7 +300,7 @@ Eagle 支持根据用æˆ
  <li>Apache Eagle project: <a href="http://incubator.apache.org/projects/eagle.html">http://incubator.apache.org/projects/eagle.html</a></li>
 </ul>
 
-<h2 id="section-4">Media Coverage</h2>
+<h2 id="引用链接">Media Coverage</h2>
 <ul>
   <li><strong>CSDN</strong>: <a 
href="http://www.csdn.net/article/2015-10-29/2826076";>http://www.csdn.net/article/2015-10-29/2826076</a></li>
   <li><strong>OSCHINA</strong>: <a 
href="http://www.oschina.net/news/67515/apache-eagle";>http://www.oschina.net/news/67515/apache-eagle</a></li>

Modified: eagle/site/sup/index.html
URL: 
http://svn.apache.org/viewvc/eagle/site/sup/index.html?rev=1778394&r1=1778393&r2=1778394&view=diff
==============================================================================
--- eagle/site/sup/index.html (original)
+++ eagle/site/sup/index.html Thu Jan 12 07:44:47 2017
@@ -131,7 +131,7 @@
         <h1 class="page-header" style="margin-top: 0px">Apache Eagle 
Security</h1>
         <p>The Apache Software Foundation takes a very active stance in 
eliminating security problems in its software products. Apache Eagle is also 
responsive to such issues around its features.</p>
 
-<p>If you have any concern regarding to Eagle’s Security or you believe a 
vulnerability is discovered, don’t hesitate to get connected with Aapche 
Security Team by sending emails to <a 
href="&#109;&#097;&#105;&#108;&#116;&#111;:&#115;&#101;&#099;&#117;&#114;&#105;&#116;&#121;&#064;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#115;&#101;&#099;&#117;&#114;&#105;&#116;&#121;&#064;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a>.
 In the message, you can indicate the project name is Eagle, provide a 
description of the issue, and you are recommended to give the way of 
reproducing it. The security team and eagle community will get back to you 
after assessing the findings.</p>
+<p>If you have any concern regarding Eagle’s security, or you believe you have discovered a vulnerability, don’t hesitate to contact the Apache Security Team by sending email to <a href="mailto:[email protected]">[email protected]</a>. In your message, indicate that the project name is Eagle, provide a description of the issue, and, if possible, include steps to reproduce it. The security team and the Eagle community will get back to you after assessing the findings.</p>
 
 <blockquote>
   <p><strong>PLEASE PAY ATTENTION</strong> to report any security problem to 
the security email address before disclosing it publicly.</p>

