http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/installation.html ---------------------------------------------------------------------- diff --git a/_site/docs/installation.html b/_site/docs/installation.html index adf5c4c..6d3b8b7 100644 --- a/_site/docs/installation.html +++ b/_site/docs/installation.html @@ -180,16 +180,19 @@ <h4 id="install-eagle">Install Eagle</h4> <ul> - <li> - <p><strong>Step 1</strong>: Clone stable version from <a href="https://github.com/apache/eagle/releases/tag/v0.4.0-incubating">eagle github</a> -> Build project mvn clean install -DskipTests=true</p> + <li><strong>Step 1</strong>: Clone stable version from <a href="https://github.com/apache/eagle/releases/tag/v0.4.0-incubating">eagle github</a> + <blockquote> + <div class="highlighter-rouge"><pre class="highlight"><code> Build project mvn clean install -DskipTests=true +</code></pre> + </div> + </blockquote> </li> <li> <p><strong>Step 2</strong>: Download eagle-bin-0.1.0.tar.gz package from successful build into your HDP sandbox.</p> <ul> <li> - <p>Option 1: <code>scp -P 2222 eagle/eagle-assembly/target/eagle-0.1.0-bin.tar.gz root@127.0.0.1:/usr/hdp/current/</code></p> + <p>Option 1: <code class="highlighter-rouge">scp -P 2222 eagle/eagle-assembly/target/eagle-0.1.0-bin.tar.gz root@127.0.0.1:/usr/hdp/current/</code></p> </li> <li> <p>Option 2: Create shared directory between host and Sandbox, and restart Sandbox. 
Then you can find the shared directory under /media in Sandbox.</p> @@ -199,27 +202,30 @@ <li> <p><strong>Step 3</strong>: Extract the Eagle tarball package</p> - <pre><code>$ cd /usr/hdp/current + <div class="highlighter-rouge"><pre class="highlight"><code>$ cd /usr/hdp/current $ tar -zxvf eagle-0.1.0-bin.tar.gz $ mv eagle-0.1.0 eagle </code></pre> + </div> </li> <li> <p><strong>Step 4</strong>: Add root as an HBase<sup id="fnref:HBASE"><a href="#fn:HBASE" class="footnote">1</a></sup> superuser via <a href="http://127.0.0.1:8080/#/main/services/HBASE/configs">Ambari</a> (optional; as an alternative, a user can operate HBase via sudo su hbase).</p> </li> - <li> - <p><strong>Step 5</strong>: Install the Eagle Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup> service -> - /usr/hdp/current/eagle/bin/eagle-ambari.sh install.</p> + <li><strong>Step 5</strong>: Install the Eagle Ambari<sup id="fnref:AMBARI"><a href="#fn:AMBARI" class="footnote">2</a></sup> service + <blockquote> + + <p>/usr/hdp/current/eagle/bin/eagle-ambari.sh install.</p> + </blockquote> </li> <li> <p><strong>Step 6</strong>: Restart <a href="http://127.0.0.1:8000/">Ambari</a>: click to disable and then re-enable Ambari.</p> </li> - <li> - <p><strong>Step 7</strong>: Start HBase & Storm<sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">3</a></sup> & Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">4</a></sup> -From the Ambari UI, restart any suggested components (“Restart button on top”) & Start Storm (Start “Nimbus”, “Supervisor” & “Storm UI Server”), Kafka (Start “Kafka Broker”), HBase (Start “RegionServer” and “HBase Master”) -> -<img src="/images/docs/Services.png" alt="Restart Services" title="Services" /></p> + <li><strong>Step 7</strong>: Start HBase & Storm<sup id="fnref:STORM"><a href="#fn:STORM" class="footnote">3</a></sup> & Kafka<sup id="fnref:KAFKA"><a href="#fn:KAFKA" class="footnote">4</a></sup> +From the Ambari UI, restart any suggested components (“Restart button on top”) & Start Storm (Start “Nimbus”, “Supervisor” & “Storm UI Server”), Kafka (Start “Kafka Broker”), HBase (Start “RegionServer” and “HBase Master”) + <blockquote> + + <p><img src="/images/docs/Services.png" alt="Restart Services" title="Services" /></p> + </blockquote> </li> <li> <p><strong>Step 8</strong>: Add Eagle Service To Ambari. (Click For Video)</p> @@ -241,9 +247,10 @@ EagleServiceSuccess</p> <li> <p><strong>Step 9</strong>: Add the policies and metadata required by running the scripts below.</p> - <pre><code>$ /usr/hdp/current/eagle/examples/sample-sensitivity-resource-create.sh + <div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/examples/sample-sensitivity-resource-create.sh $ /usr/hdp/current/eagle/examples/sample-policy-create.sh </code></pre> + </div> </li> </ul> @@ -254,16 +261,16 @@ $ /usr/hdp/current/eagle/examples/sample-policy-create.sh <div class="footnotes"> <ol> <li id="fn:HBASE"> - <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">↩</a></p> + <p><em>All mentions of “hbase” on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">↩</a></p> </li> <li id="fn:AMBARI"> - <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">↩</a></p> + <p><em>All mentions of “ambari” on this page represent Apache Ambari.</em> <a href="#fnref:AMBARI" class="reversefootnote">↩</a></p> </li> <li id="fn:STORM"> - <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">↩</a></p> + <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">↩</a></p> </li> <li id="fn:KAFKA"> - <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> + <p><em>All mentions of “kafka” on 
this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> </li> </ol> </div>
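The copy/extract/rename flow in Steps 2 and 3 above can be rehearsed before touching the sandbox. The sketch below is a minimal dry run: a temporary directory stands in for /usr/hdp/current, and a fabricated stub tarball stands in for the real eagle-0.1.0-bin.tar.gz build artifact (both are assumptions for the sake of a self-contained script); only the tar/mv commands mirror the documented steps.

```shell
# Dry run of the Step 3 extract-and-rename flow against a throwaway tree.
# $work stands in for /usr/hdp/current; the tarball contents are a stub.
set -eu
work=$(mktemp -d)

# Fabricate a stand-in for the eagle-0.1.0-bin.tar.gz build artifact.
mkdir -p "$work/eagle-0.1.0/bin"
printf '#!/bin/sh\n' > "$work/eagle-0.1.0/bin/eagle-ambari.sh"
tar -C "$work" -czf "$work/eagle-0.1.0-bin.tar.gz" eagle-0.1.0
rm -r "$work/eagle-0.1.0"

# Step 3 as documented, with $work in place of /usr/hdp/current.
cd "$work"
tar -zxvf eagle-0.1.0-bin.tar.gz
mv eagle-0.1.0 eagle
```

On the real sandbox the same tar/mv commands run from /usr/hdp/current against the actual build artifact, after which the eagle-ambari.sh install of Step 5 becomes available under eagle/bin.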
http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/jmx-metric-monitoring.html ---------------------------------------------------------------------- diff --git a/_site/docs/jmx-metric-monitoring.html b/_site/docs/jmx-metric-monitoring.html index bd6b2d8..9574f54 100644 --- a/_site/docs/jmx-metric-monitoring.html +++ b/_site/docs/jmx-metric-monitoring.html @@ -177,9 +177,10 @@ <h3 id="setup"><strong>Setup</strong></h3> <p>From the Hortonworks sandbox, just run the setup scripts below to install the Python JMX script, create the Kafka topic, update Apache HBase tables, and deploy the “hadoopjmx” Storm topology.</p> -<pre><code>$ /usr/hdp/current/eagle/examples/hadoop-metric-sandbox-starter.sh +<div class="highlighter-rouge"><pre class="highlight"><code>$ /usr/hdp/current/eagle/examples/hadoop-metric-sandbox-starter.sh $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh </code></pre> +</div> <p><br /></p> @@ -204,15 +205,17 @@ $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh <li> <p>First make sure that the Kafka topic “nn_jmx_metric_sandbox” is periodically populated with JMX metric data (this confirms that the Python script is running).</p> - <pre><code> $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic nn_jmx_metric_sandbox + <div class="highlighter-rouge"><pre class="highlight"><code> $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-consumer.sh --zookeeper sandbox.hortonworks.com:2181 --topic nn_jmx_metric_sandbox </code></pre> + </div> </li> <li> <p>Generate an alert by producing an alert-triggering message into the Kafka topic.</p> - <pre><code> $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox + <div class="highlighter-rouge"><pre class="highlight"><code> $ /usr/hdp/2.2.4.2-2/kafka/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic nn_jmx_metric_sandbox $ {"host": "localhost", "timestamp": 1457033916718, "metric": "hadoop.namenode.fsnamesystemstate.fsstate", "component": "namenode", "site": "sandbox", "value": 1.0} </code></pre> + </div> </li> </ul> @@ -223,10 +226,10 @@ $ /usr/hdp/current/eagle/examples/hadoop-metric-policy-create.sh <div class="footnotes"> <ol> <li id="fn:KAFKA"> - <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> + <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> </li> <li id="fn:STORM"> - <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">↩</a></p> + <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/mapr-integration.html ---------------------------------------------------------------------- diff --git a/_site/docs/mapr-integration.html b/_site/docs/mapr-integration.html index bb0a9dd..116d119 100644 --- a/_site/docs/mapr-integration.html +++ b/_site/docs/mapr-integration.html @@ -171,84 +171,97 @@ <p>Here are the steps to follow:</p> <h4 id="step1-enable-audit-logs-for-filesystem-operations-and-table-operations-in-mapr">Step1: Enable audit logs for FileSystem Operations and Table Operations in MapR</h4> -<p>First we need to enable data auditing at all three levels: cluster level, volume level and directory,file or table level. 
-##### Cluster level:</p> +<p>First we need to enable data auditing at all three levels: cluster level, volume level and directory,file or table level.</p> +<h5 id="cluster-level">Cluster level:</h5> -<pre><code> $ maprcli audit data -cluster <cluster name> -enabled true +<div class="highlighter-rouge"><pre class="highlight"><code> $ maprcli audit data -cluster <cluster name> -enabled true [ -maxsize <GB, defaut value is 32. When size of audit logs exceed this number, an alarm will be sent to the dashboard in the MapR Control Service > ] [ -retention <number of Days> ] </code></pre> +</div> <p>Example:</p> -<pre><code> $ maprcli audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30 +<div class="highlighter-rouge"><pre class="highlight"><code> $ maprcli audit data -cluster mapr.cluster.com -enabled true -maxsize 30 -retention 30 </code></pre> +</div> <h5 id="volume-level">Volume level:</h5> -<pre><code> $ maprcli volume audit -cluster <cluster name> -enabled true +<div class="highlighter-rouge"><pre class="highlight"><code> $ maprcli volume audit -cluster <cluster name> -enabled true -name <volume name> [ -coalesce <interval in minutes, the interval of time during which READ, WRITE, or GETATTR operations on one file from one client IP address are logged only once, if auditing is enabled> ] </code></pre> +</div> <p>Example:</p> -<pre><code> $ maprcli volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true +<div class="highlighter-rouge"><pre class="highlight"><code> $ maprcli volume audit -cluster mapr.cluster.com -name mapr.tmp -enabled true </code></pre> +</div> <p>To verify that auditing is enabled for a particular volume, use this command:</p> -<pre><code> $ maprcli volume info -name <volume name> -json | grep -i 'audited\|coalesce' +<div class="highlighter-rouge"><pre class="highlight"><code> $ maprcli volume info -name <volume name> -json | grep -i 'audited\|coalesce' </code></pre> +</div> <p>and you should see something like 
this:</p> -<pre><code> "audited":1, +<div class="highlighter-rouge"><pre class="highlight"><code> "audited":1, "coalesceInterval":60 </code></pre> +</div> <p>If âauditedâ is â1â then auditing is enabled for this volume.</p> <h5 id="directory-file-or-mapr-db-table-level">Directory, file, or MapR-DB table level:</h5> -<pre><code> $ hadoop mfs -setaudit on <directory|file|table> +<div class="highlighter-rouge"><pre class="highlight"><code> $ hadoop mfs -setaudit on <directory|file|table> </code></pre> +</div> -<p>To check whether Auditing is Enabled for a Directory, File, or MapR-DB Table, use <code>$ hadoop mfs -ls</code> +<p>To check whether Auditing is Enabled for a Directory, File, or MapR-DB Table, use <code class="highlighter-rouge">$ hadoop mfs -ls</code> Example: -Before enable the audit log on file <code>/tmp/dir</code>, try <code>$ hadoop mfs -ls /tmp/dir</code>, you should see something like this:</p> +Before enable the audit log on file <code class="highlighter-rouge">/tmp/dir</code>, try <code class="highlighter-rouge">$ hadoop mfs -ls /tmp/dir</code>, you should see something like this:</p> -<pre><code>drwxr-xr-x Z U U - root root 0 2016-03-02 15:02 268435456 /tmp/dir +<div class="highlighter-rouge"><pre class="highlight"><code>drwxr-xr-x Z U U - root root 0 2016-03-02 15:02 268435456 /tmp/dir p 2050.32.131328 mapr2.da.dg:5660 mapr1.da.dg:5660 </code></pre> +</div> -<p>The second <code>U</code> means auditing on this file is not enabled. +<p>The second <code class="highlighter-rouge">U</code> means auditing on this file is not enabled. 
Enable auditing with this command:</p> -<pre><code>$ hadoop mfs -setaudit on /tmp/dir +<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop mfs -setaudit on /tmp/dir </code></pre> +</div> <p>Then check the auditing bit with :</p> -<pre><code>$ hadoop mfs -ls /tmp/dir +<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop mfs -ls /tmp/dir </code></pre> +</div> <p>you should see something like this:</p> -<pre><code>drwxr-xr-x Z U A - root root 0 2016-03-02 15:02 268435456 /tmp/dir +<div class="highlighter-rouge"><pre class="highlight"><code>drwxr-xr-x Z U A - root root 0 2016-03-02 15:02 268435456 /tmp/dir p 2050.32.131328 mapr2.da.dg:5660 mapr1.da.dg:5660 </code></pre> +</div> -<p>We can see the previous <code>U</code> has been changed to <code>A</code> which indicates auditing on this file is enabled.</p> +<p>We can see the previous <code class="highlighter-rouge">U</code> has been changed to <code class="highlighter-rouge">A</code> which indicates auditing on this file is enabled.</p> -<p><code>Important</code>: +<p><code class="highlighter-rouge">Important</code>: When a directory has been enabled auditing, directories/files located in this dir wonât inherit auditing, but a newly created file/dir (after enabling the auditing on this dir) in this directory will.</p> <h4 id="step2-stream-log-data-into-kafka-by-using-logstash">Step2: Stream log data into Kafka by using Logstash</h4> -<p>As MapR do not have name node, instead it use CLDB service, we have to use logstash to stream log data into Kafka. 
-- First find out the nodes that have CLDB service -- Then find out the location of audit log files, eg: <code>/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>, file names should be in this format: <code>FSAudit.log-2016-05-04-001.json</code> -- Created a logstash conf file and run it, following this doc<a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md">Logstash-kafka</a></p> +<p>As MapR does not have a NameNode and uses the CLDB service instead, we have to use Logstash to stream log data into Kafka.</p> +<ul> + <li>First, find out the nodes that run the CLDB service</li> + <li>Then find out the location of the audit log files, e.g. <code class="highlighter-rouge">/mapr/mapr.cluster.com/var/mapr/local/mapr1.da.dg/audit/</code>; file names should be in this format: <code class="highlighter-rouge">FSAudit.log-2016-05-04-001.json</code></li> + <li>Create a Logstash conf file and run it, following this doc: <a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/docs/logstash-kafka-conf.md">Logstash-kafka</a></li> +</ul> <h4 id="step3-set-up-maprfsauditlog-applicaiton-in-eagle-service">Step3: Set up the maprFSAuditLog application in Eagle Service</h4> -<p>After Eagle Service gets started, create mapFSAuditLog application using: <code>$ ./maprFSAuditLog-init.sh</code>. By default it will create maprFSAuditLog in site “sandbox”, you may need to change it to your own site. +<p>After Eagle Service gets started, create the maprFSAuditLog application using <code class="highlighter-rouge">$ ./maprFSAuditLog-init.sh</code>. By default it will create maprFSAuditLog in the site “sandbox”; you may need to change it to your own site. After these steps you are good to go.</p> <p>Have fun!!! 
:)</p> @@ -266,7 +279,7 @@ After these steps you are good to go.</p> <div class="footnotes"> <ol> <li id="fn:KAFKA"> - <p><em>All mentions of âkafkaâ on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> + <p><em>All mentions of âkafkaâ on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/quick-start-0.3.0.html ---------------------------------------------------------------------- diff --git a/_site/docs/quick-start-0.3.0.html b/_site/docs/quick-start-0.3.0.html index 4d048fc..1991131 100644 --- a/_site/docs/quick-start-0.3.0.html +++ b/_site/docs/quick-start-0.3.0.html @@ -178,21 +178,22 @@ <li> <p>Build manually with <a href="https://maven.apache.org/">Apache Maven</a>:</p> - <pre><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz + <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf apache-eagle-0.3.0-incubating-src.tar.gz $ cd incubator-eagle-release-0.3.0-rc3 $ curl -O https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/180.patch $ git apply 180.patch $ mvn clean package -DskipTests </code></pre> + </div> - <p>After building successfully, you will get tarball under <code>eagle-assembly/target/</code> named as <code>eagle-0.3.0-incubating-bin.tar.gz</code> + <p>After building successfully, you will get tarball under <code class="highlighter-rouge">eagle-assembly/target/</code> named as <code class="highlighter-rouge">eagle-0.3.0-incubating-bin.tar.gz</code> <br /></p> </li> </ul> <h3 id="install-eagle"><strong>Install Eagle</strong></h3> -<pre><code> $ scp -P 2222 eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz root@127.0.0.1:/root/ +<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222 eagle-assembly/target/eagle-0.3.0-incubating-bin.tar.gz root@127.0.0.1:/root/ $ ssh root@127.0.0.1 -p 2222 (password is hadoop) $ tar 
-zxvf eagle-0.3.0-incubating-bin.tar.gz $ mv eagle-0.3.0-incubating eagle @@ -200,6 +201,7 @@ $ mvn clean package -DskipTests $ cd /usr/hdp/current/eagle $ examples/eagle-sandbox-starter.sh </code></pre> +</div> <p><br /></p> @@ -217,7 +219,7 @@ $ mvn clean package -DskipTests <div class="footnotes"> <ol> <li id="fn:HADOOP"> - <p><em>All mentions of âhadoopâ on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> + <p><em>All mentions of âhadoopâ on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/quick-start.html ---------------------------------------------------------------------- diff --git a/_site/docs/quick-start.html b/_site/docs/quick-start.html index 6575409..18ae834 100644 --- a/_site/docs/quick-start.html +++ b/_site/docs/quick-start.html @@ -179,21 +179,22 @@ <li> <p>Build manually with <a href="https://maven.apache.org/">Apache Maven</a>:</p> - <pre><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz + <div class="highlighter-rouge"><pre class="highlight"><code>$ tar -zxvf apache-eagle-0.4.0-incubating-src.tar.gz $ cd apache-eagle-0.4.0-incubating-src $ curl -O https://patch-diff.githubusercontent.com/raw/apache/eagle/pull/268.patch $ git apply 268.patch $ mvn clean package -DskipTests </code></pre> + </div> - <p>After building successfully, you will get a tarball under <code>eagle-assembly/target/</code> named <code>apache-eagle-0.4.0-incubating-bin.tar.gz</code> + <p>After building successfully, you will get a tarball under <code class="highlighter-rouge">eagle-assembly/target/</code> named <code class="highlighter-rouge">apache-eagle-0.4.0-incubating-bin.tar.gz</code> <br /></p> </li> </ul> <h3 id="install-eagle"><strong>Install Eagle</strong></h3> -<pre><code> $ scp -P 2222 eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz root@127.0.0.1:/root/ 
+<div class="highlighter-rouge"><pre class="highlight"><code> $ scp -P 2222 eagle-assembly/target/apache-eagle-0.4.0-incubating-bin.tar.gz root@127.0.0.1:/root/ $ ssh root@127.0.0.1 -p 2222 (password is hadoop) $ tar -zxvf apache-eagle-0.4.0-incubating-bin.tar.gz $ mv apache-eagle-0.4.0-incubating eagle @@ -201,20 +202,22 @@ $ mvn clean package -DskipTests $ cd /usr/hdp/current/eagle $ examples/eagle-sandbox-starter.sh </code></pre> +</div> <p><br /></p> <h3 id="sample-application-hive-query-activity-monitoring-in-sandbox"><strong>Sample Application: Hive query activity monitoring in sandbox</strong></h3> -<p>After executing <code>examples/eagle-sandbox-starter.sh</code>, you have a sample application (topology) running on the Apache Storm (check with <a href="http://sandbox.hortonworks.com:8744/index.html">storm ui</a>), and a sample policy of Hive activity monitoring defined.</p> +<p>After executing <code class="highlighter-rouge">examples/eagle-sandbox-starter.sh</code>, you have a sample application (topology) running on the Apache Storm (check with <a href="http://sandbox.hortonworks.com:8744/index.html">storm ui</a>), and a sample policy of Hive activity monitoring defined.</p> <p>Next you can trigger an alert by running a Hive query.</p> -<pre><code>$ su hive +<div class="highlighter-rouge"><pre class="highlight"><code>$ su hive $ hive $ set hive.execution.engine=mr; $ use xademo; $ select a.phone_number from customer_details a, call_detail_records b where a.phone_number=b.phone_number; </code></pre> +</div> <p><br /></p> <hr /> @@ -224,10 +227,10 @@ $ select a.phone_number from customer_details a, call_detail_records b where a.p <div class="footnotes"> <ol> <li id="fn:HADOOP"> - <p><em>Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> + <p><em>Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> </li> <li id="fn:HIVE"> - <p><em>All mentions of âhiveâ on this page represent Apache Hive.</em> <a 
href="#fnref:HIVE" class="reversefootnote">↩</a></p> + <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/security.html ---------------------------------------------------------------------- diff --git a/_site/docs/security.html b/_site/docs/security.html index 0fc1599..34794a1 100644 --- a/_site/docs/security.html +++ b/_site/docs/security.html @@ -158,7 +158,7 @@ <h1 class="page-header" style="margin-top: 0px">Apache Eagle Security</h1> <p>The Apache Software Foundation takes a very active stance in eliminating security problems in its software products. Apache Eagle is also responsive to such issues around its features.</p> -<p>If you have any concern regarding Eagle’s security, or believe you have discovered a vulnerability, don’t hesitate to contact the Apache Security Team by sending email to <a href="mailto:security@apache.org">security@apache.org</a>. In the message, indicate that the project is Eagle, describe the issue, and ideally include steps to reproduce it. The security team and eagle community will get back to you after assessing the findings.</p> +<p>If you have any concern regarding Eagle’s security, or believe you have discovered a vulnerability, don’t hesitate to contact the Apache Security Team by sending email to <a href="mailto:secur...@apache.org">secur...@apache.org</a>. In the message, indicate that the project is Eagle, describe the issue, and ideally include steps to reproduce it. 
The security team and eagle community will get back to you after assessing the findings.</p> <blockquote> <p><strong>PLEASE PAY ATTENTION</strong> to report any security problem to the security email address before disclosing it publicly.</p> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/serviceconfiguration.html ---------------------------------------------------------------------- diff --git a/_site/docs/serviceconfiguration.html b/_site/docs/serviceconfiguration.html index 2519ae8..3ecd741 100644 --- a/_site/docs/serviceconfiguration.html +++ b/_site/docs/serviceconfiguration.html @@ -171,7 +171,7 @@ description of Eagle Service configuration.</p> <li>for hbase</li> </ul> -<pre><code>eagle { +<div class="highlighter-rouge"><pre class="highlight"><code>eagle { service{ storage-type="hbase" hbase-zookeeper-quorum="sandbox.hortonworks.com" @@ -182,12 +182,13 @@ description of Eagle Service configuration.</p> } } </code></pre> +</div> <ul> <li>for mysql</li> </ul> -<pre><code>eagle { +<div class="highlighter-rouge"><pre class="highlight"><code>eagle { service { storage-type="jdbc" storage-adapter="mysql" @@ -201,12 +202,13 @@ description of Eagle Service configuration.</p> } } </code></pre> +</div> <ul> <li>for derby</li> </ul> -<pre><code>eagle { +<div class="highlighter-rouge"><pre class="highlight"><code>eagle { service { storage-type="jdbc" storage-adapter="derby" @@ -220,6 +222,7 @@ description of Eagle Service configuration.</p> } } </code></pre> +</div> <p><br /></p> </div><!--end of loadcontent--> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/terminology.html ---------------------------------------------------------------------- diff --git a/_site/docs/terminology.html b/_site/docs/terminology.html index 1ee3156..27c99ca 100644 --- a/_site/docs/terminology.html +++ b/_site/docs/terminology.html @@ -193,10 +193,10 @@ They are basic knowledge of Eagle which also will help to well understand Eagle. 
<div class="footnotes"> <ol> <li id="fn:HADOOP"> - <p><em>All mentions of âhadoopâ on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> + <p><em>All mentions of âhadoopâ on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> </li> <li id="fn:HIVE"> - <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> + <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/classification.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/classification.html b/_site/docs/tutorial/classification.html index e79dc14..64e6c49 100644 --- a/_site/docs/tutorial/classification.html +++ b/_site/docs/tutorial/classification.html @@ -182,30 +182,33 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive< <p>You may configure the default path for Apache Hadoop clients to connect remote hdfs namenode.</p> - <pre><code> classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020 </code></pre> + </div> </li> <li> <p>HA case</p> <p>Basically, you point your fs.defaultFS at your nameservice and let the client know how its configured (the backing namenodes) and how to fail over between them under the HA mode</p> - <pre><code> classification.fs.defaultFS=hdfs://nameservice1 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.fs.defaultFS=hdfs://nameservice1 classification.dfs.nameservices=nameservice1 classification.dfs.ha.namenodes.nameservice1=namenode1,namenode2 classification.dfs.namenode.rpc-address.nameservice1.namenode1=hadoopnamenode01:8020 
classification.dfs.namenode.rpc-address.nameservice1.namenode2=hadoopnamenode01:8020 classification.dfs.namenode.rpc-address.nameservice1.namenode2=hadoopnamenode02:8020 classification.dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider </code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>For a Kerberos-secured cluster, you need to get a keytab file and the principal from your admin, and configure “eagle.keytab.file” and “eagle.kerberos.principal” to authenticate its access.</p> - <pre><code> classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab + <div class="highlighter-rouge"><pre class="highlight"><code> classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab classification.eagle.kerberos.principal=ea...@somewhere.com </code></pre> + </div> <p>If there is an exception about an “invalid server principal name”, you may need to check the DNS resolver or the data-transfer settings, such as “dfs.encrypt.data.transfer”, “dfs.encrypt.data.transfer.algorithm”, “dfs.trustedchannel.resolver.class”, and “dfs.datatransfer.client.encrypt”.</p> </li> @@ -216,12 +219,13 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive< <li> <p>Basic</p> - <pre><code> classification.accessType=metastoredb_jdbc + <div class="highlighter-rouge"><pre class="highlight"><code> classification.accessType=metastoredb_jdbc classification.password=hive classification.user=hive classification.jdbcDriverClassName=com.mysql.jdbc.Driver classification.jdbcUrl=jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true </code></pre> + </div> </li> </ul> </li> @@ -234,16 +238,17 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive< <p>You need to set the “hbase.zookeeper.quorum” property (here “localhost”) and the “hbase.zookeeper.property.clientPort” property.</p> - <pre><code> classification.hbase.zookeeper.property.clientPort=2181 + <div class="highlighter-rouge"><pre class="highlight"><code> 
classification.hbase.zookeeper.property.clientPort=2181 classification.hbase.zookeeper.quorum=localhost </code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>According to your environment, you can add or remove some of the following properties. Here is the reference.</p> - <pre><code> classification.hbase.zookeeper.property.clientPort=2181 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.hbase.zookeeper.property.clientPort=2181 classification.hbase.zookeeper.quorum=localhost classification.hbase.security.authentication=kerberos classification.hbase.master.kerberos.principal=hadoop/_h...@example.com @@ -251,6 +256,7 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive< classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab classification.eagle.kerberos.principal=ea...@example.com </code></pre> + </div> </li> </ul> </li> @@ -321,10 +327,10 @@ Currently this feature is available ONLY for applications monitoring HDFS, Hive< <div class="footnotes"> <ol> <li id="fn:HIVE"> - <p><em>All mentions of âhiveâ on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> + <p><em>All mentions of âhiveâ on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> </li> <li id="fn:HBASE"> - <p><em>All mentions of âhbaseâ on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">↩</a></p> + <p><em>All mentions of âhbaseâ on this page represent Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/ldap.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/ldap.html b/_site/docs/tutorial/ldap.html index 7cc814e..918fe8c 100644 --- a/_site/docs/tutorial/ldap.html +++ b/_site/docs/tutorial/ldap.html @@ -160,7 +160,7 @@ <p>Step 1: 
edit configuration under conf/ldap.properties.</p> -<pre><code>ldap.server=ldap://localhost:10389 +<div class="highlighter-rouge"><pre class="highlight"><code>ldap.server=ldap://localhost:10389 ldap.username=uid=admin,ou=system ldap.password=secret ldap.user.searchBase=ou=Users,o=mojo @@ -169,12 +169,13 @@ ldap.user.groupSearchBase=ou=groups,o=mojo acl.adminRole= acl.defaultRole=ROLE_USER </code></pre> +</div> <p>acl.adminRole and acl.defaultRole are two customized properties for Eagle. Eagle manages admin users with groups. If you set acl.adminRole as ROLE_{EAGLE-ADMIN-GROUP-NAME}, members in this group have the admin privilege. acl.defaultRole is ROLE_USER.</p> <p>Step 2: edit conf/eagle-service.conf, and add springActiveProfile=âdefaultâ</p> -<pre><code>eagle{ +<div class="highlighter-rouge"><pre class="highlight"><code>eagle{ service{ storage-type="hbase" hbase-zookeeper-quorum="localhost" @@ -184,6 +185,7 @@ acl.defaultRole=ROLE_USER } } </code></pre> +</div> </div><!--end of loadcontent--> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/notificationplugin.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/notificationplugin.html b/_site/docs/tutorial/notificationplugin.html index 627b7b1..db26f69 100644 --- a/_site/docs/tutorial/notificationplugin.html +++ b/_site/docs/tutorial/notificationplugin.html @@ -183,12 +183,12 @@ </li> </ul> -<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" /> -### Customized Notification Plugin</p> +<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" /></p> +<h3 id="customized-notification-plugin">Customized Notification Plugin</h3> <p>To integrate a customized notification plugin, we must implement an interface</p> -<pre><code>public interface NotificationPlugin { +<div class="highlighter-rouge"><pre class="highlight"><code>public interface NotificationPlugin { /** * for initialization * @throws Exception 
@@ -218,24 +218,26 @@ void onAlert(AlertAPIEntity alertEntity) throws Exception; List<NotificationStatus> getStatusList(); } Examples: AlertKafkaPlugin, AlertEmailPlugin, and AlertEagleStorePlugin. </code></pre> +</div> <p>The second and crucial step is to register the configurations of the customized plugin. In other words, we need to persist the configuration template into the database in order to expose the configurations to users in the front end.</p> <p>Examples:</p> -<pre><code>{ - "prefix": "alertNotifications", - "tags": { - "notificationType": "kafka" - }, - "className": "org.apache.eagle.notification.plugin.AlertKafkaPlugin", - "description": "send alert to kafka bus", - "enabled":true, - "fields": "[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]" -} -</code></pre> +<div class="highlighter-rouge"><pre class="highlight"><code><span class="p">{</span><span class="w"> + </span><span class="nt">"prefix"</span><span class="p">:</span><span class="w"> </span><span class="s2">"alertNotifications"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"tags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"notificationType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"kafka"</span><span class="w"> + </span><span class="p">},</span><span class="w"> + </span><span class="nt">"className"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org.apache.eagle.notification.plugin.AlertKafkaPlugin"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"send alert to kafka bus"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"enabled"</span><span class="p">:</span><span class="kc">true</span><span class="p">,</span><span class="w"> + </span><span 
class="nt">"fields"</span><span class="p">:</span><span class="w"> </span><span class="s2">"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"</span><span class="w"> +</span><span class="p">}</span><span class="w"> +</span></code></pre> +</div> -<p><strong>Note</strong>: <code>fields</code> is the configuration for notification type <code>kafka</code></p> +<p><strong>Note</strong>: <code class="highlighter-rouge">fields</code> is the configuration for notification type <code class="highlighter-rouge">kafka</code></p> <p>How can we do that? <a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/bin/eagle-topology-init.sh">Here</a> are Eagle's other notification plugin configurations. Just append yours to it, and run this script when the Eagle service is up.</p> @@ -246,7 +248,7 @@ List<NotificationStatus> getStatusList(); <div class="footnotes"> <ol> <li id="fn:KAFKA"> - <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> + <p><em>All mentions of “kafka” on this page represent Apache Kafka.</em> <a href="#fnref:KAFKA" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/policy.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/policy.html b/_site/docs/tutorial/policy.html index e732eb0..657cf2d 100644 --- a/_site/docs/tutorial/policy.html +++ b/_site/docs/tutorial/policy.html @@ -183,12 +183,13 @@ <li> <p><strong>Step 2</strong>: Eagle supports a variety of properties for match criteria where users can set different values. Eagle also supports window functions to extend policies with time functions.</p> - <pre><code>command = delete + <div class="highlighter-rouge"><pre class="highlight"><code>command = delete (Eagle currently supports the following commands: open, delete, copy, append, copy from local, get, move, mkdir, create, list, change permissions) source = /tmp/private (Eagle supports wildcarding for property values, for example /tmp/*) </code></pre> + </div> <p><img src="/images/docs/hdfs-policy2.png" alt="HDFS Policies" /></p> </li> @@ -215,12 +216,13 @@ source = /tmp/private <li> <p><strong>Step 2</strong>: Eagle supports a variety of properties for match criteria where users can set different values. Eagle also supports window functions to extend policies with time functions.</p> - <pre><code>command = Select + <div class="highlighter-rouge"><pre class="highlight"><code>command = Select (Eagle currently supports the following DDL statements: Create, Drop, Alter, Truncate, Show) sensitivity type = PHONE_NUMBER (Eagle supports classifying data in Hive with different sensitivity types. 
Users can use these sensitivity types to create policies) </code></pre> + </div> <p><img src="/images/docs/hive-policy2.png" alt="Hive Policies" /></p> </li> @@ -238,7 +240,7 @@ sensitivity type = PHONE_NUMBER <div class="footnotes"> <ol> <li id="fn:HIVE"> - <p><em>All mentions of âhiveâ on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> + <p><em>All mentions of âhiveâ on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/site-0.3.0.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/site-0.3.0.html b/_site/docs/tutorial/site-0.3.0.html index a542128..45784d9 100644 --- a/_site/docs/tutorial/site-0.3.0.html +++ b/_site/docs/tutorial/site-0.3.0.html @@ -180,32 +180,35 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p> <p>You may configure the default path for Hadoop clients to connect remote hdfs namenode.</p> - <pre><code> {"fs.defaultFS":"hdfs://sandbox.hortonworks.com:8020"} -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="nt">"fs.defaultFS"</span><span class="p">:</span><span class="s2">"hdfs://sandbox.hortonworks.com:8020"</span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> <li> <p>HA case</p> <p>Basically, you point your fs.defaultFS at your nameservice and let the client know how its configured (the backing namenodes) and how to fail over between them under the HA mode</p> - <pre><code> {"fs.defaultFS":"hdfs://nameservice1", - "dfs.nameservices": "nameservice1", - "dfs.ha.namenodes.nameservice1":"namenode1,namenode2", - "dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020", - "dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020", - 
"dfs.client.failover.proxy.provider.nameservice1": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="nt">"fs.defaultFS"</span><span class="p">:</span><span class="s2">"hdfs://nameservice1"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.nameservices"</span><span class="p">:</span><span class="w"> </span><span class="s2">"nameservice1"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.ha.namenodes.nameservice1"</span><span class="p">:</span><span class="s2">"namenode1,namenode2"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.namenode.rpc-address.nameservice1.namenode1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hadoopnamenode01:8020"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.namenode.rpc-address.nameservice1.namenode2"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hadoopnamenode02:8020"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.client.failover.proxy.provider.nameservice1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>For Kerberos-secured cluster, you need to get a keytab file and the principal from your admin, and configure âeagle.keytab.fileâ and âeagle.kerberos.principalâ to authenticate its access.</p> - <pre><code> { "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab", - "eagle.kerberos.principal":"ea...@somewhere.com" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span 
class="p">{</span><span class="w"> </span><span class="nt">"eagle.keytab.file"</span><span class="p">:</span><span class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"eagle.kerberos.principal"</span><span class="p">:</span><span class="s2">"ea...@somewhere.com"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> <p>If there is an exception about âinvalid server principal nameâ, you may need to check the DNS resolver, or the data transfer , such as âdfs.encrypt.data.transferâ, âdfs.encrypt.data.transfer.algorithmâ, âdfs.trustedchannel.resolver.classâ, âdfs.datatransfer.client.encryptâ.</p> </li> @@ -216,14 +219,15 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p> <li> <p>Basic</p> - <pre><code> { - "accessType": "metastoredb_jdbc", - "password": "hive", - "user": "hive", - "jdbcDriverClassName": "com.mysql.jdbc.Driver", - "jdbcUrl": "jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"accessType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"metastoredb_jdbc"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"password"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"user"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"jdbcDriverClassName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"com.mysql.jdbc.Driver"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"jdbcUrl"</span><span class="p">:</span><span class="w"> </span><span 
class="s2">"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> </ul> </li> @@ -236,27 +240,29 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p> <p>You need to set the “hbase.zookeeper.quorum” property (e.g. “localhost”) and the “hbase.zookeeper.property.clientPort” property.</p> - <pre><code> { - "hbase.zookeeper.property.clientPort":"2181", - "hbase.zookeeper.quorum":"localhost" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.property.clientPort"</span><span class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.quorum"</span><span class="p">:</span><span class="s2">"localhost"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>According to your environment, you can add or remove some of the following properties. 
Here is the reference.</p> - <pre><code> { - "hbase.zookeeper.property.clientPort":"2181", - "hbase.zookeeper.quorum":"localhost", - "hbase.security.authentication":"kerberos", - "hbase.master.kerberos.principal":"hadoop/_h...@example.com", - "zookeeper.znode.parent":"/hbase", - "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab", - "eagle.kerberos.principal":"ea...@example.com" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.property.clientPort"</span><span class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.quorum"</span><span class="p">:</span><span class="s2">"localhost"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.security.authentication"</span><span class="p">:</span><span class="s2">"kerberos"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.master.kerberos.principal"</span><span class="p">:</span><span class="s2">"hadoop/_h...@example.com"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"zookeeper.znode.parent"</span><span class="p">:</span><span class="s2">"/hbase"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"eagle.keytab.file"</span><span class="p">:</span><span class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"eagle.kerberos.principal"</span><span class="p">:</span><span class="s2">"ea...@example.com"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> </ul> </li> @@ -274,13 +280,13 @@ Here we give configuration examples for HDFS, HBASE, and Hive.</p> <div class="footnotes"> <ol> <li id="fn:HADOOP"> - <p><em>All mentions of âhadoopâ on this page represent Apache Hadoop.</em> <a 
href="#fnref:HADOOP" class="reversefootnote">↩</a></p> + <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> </li> <li id="fn:HIVE"> - <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> + <p><em>Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> </li> <li id="fn:HBASE"> - <p><em>Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">↩</a></p> + <p><em>Apache HBase.</em> <a href="#fnref:HBASE" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/topologymanagement.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/topologymanagement.html b/_site/docs/tutorial/topologymanagement.html index 2a95034..d635def 100644 --- a/_site/docs/tutorial/topologymanagement.html +++ b/_site/docs/tutorial/topologymanagement.html @@ -168,7 +168,7 @@ <p>Application manager consists of a daemon scheduler and an execution module. The scheduler periodically loads user operations (start/stop) from the database, and the execution module executes these operations. 
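The scheduler/execution split described above can be sketched as a poll-and-execute loop. This is an illustrative sketch only, with assumed names (AppManagerSketch, Command, tick) and an in-memory queue standing in for the database Eagle actually polls; it is not Eagle's API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of the application manager: a daemon scheduler that
// periodically drains pending start/stop operations, and an execution module
// that carries each one out.
public class AppManagerSketch {
    enum Op { START, STOP }

    static class Command {
        final String topology;
        final Op op;
        Command(String topology, Op op) { this.topology = topology; this.op = op; }
    }

    // Stand-in for the table of user-requested operations.
    static final Queue<Command> pendingOps = new ArrayDeque<>();

    // Execution module: perform one operation (here, just describe it).
    static String execute(Command c) {
        return c.op + " " + c.topology;
    }

    // One scheduler tick: drain and execute everything that is pending.
    static List<String> tick() {
        List<String> executed = new ArrayList<>();
        for (Command c = pendingOps.poll(); c != null; c = pendingOps.poll()) {
            executed.add(execute(c));
        }
        return executed;
    }

    public static void main(String[] args) {
        pendingOps.add(new Command("sandbox-hbaseSecurityLog-topology", Op.START));
        pendingOps.add(new Command("sandbox-hdfsAuditLog-topology", Op.STOP));
        // prints [START sandbox-hbaseSecurityLog-topology, STOP sandbox-hdfsAuditLog-topology]
        System.out.println(tick());
    }
}
```

In the real system the "tick" runs on a timer and the queue is the Eagle database, which is what lets operations requested in the UI survive a scheduler restart.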
For more details, please refer to <a href="https://cwiki.apache.org/confluence/display/EAG/Application+Management">here</a>.</p> <h3 id="configurations">Configurations</h3> -<p>The configuration file <code>eagle-scheduler.conf</code> defines scheduler parameters, execution platform settings and parts of the default topology configuration.</p> +<p>The configuration file <code class="highlighter-rouge">eagle-scheduler.conf</code> defines scheduler parameters, execution platform settings and parts of the default topology configuration.</p> <ul> <li> @@ -262,7 +262,7 @@ <li> <p>Edit eagle-scheduler.conf, and start the Eagle service</p> - <pre><code> # enable application manager + <div class="highlighter-rouge"><pre class="highlight"><code> # enable application manager appCommandLoaderEnabled = true # provide jar path @@ -272,9 +272,10 @@ envContextConfig.url = "http://sandbox.hortonworks.com:8744" envContextConfig.nimbusHost = "sandbox.hortonworks.com" </code></pre> + </div> <p>For more configurations, please refer back to <a href="/docs/configuration.html">Application Configuration</a>. <br /> - After the configuration is ready, start the Eagle service with <code>bin/eagle-service.sh start</code>.</p> + After the configuration is ready, start the Eagle service with <code class="highlighter-rouge">bin/eagle-service.sh start</code>.</p> </li> <li> <p>Go to admin page @@ -296,11 +297,11 @@ <li> <p>Go to site page, and add topology configurations.</p> - <p><strong>NOTICE</strong>: topology configurations defined here require an extra prefix <code>app.</code></p> + <p><strong>NOTICE</strong>: topology configurations defined here require an extra prefix <code class="highlighter-rouge">app.</code></p> <p>Below are some example configurations for [site=sandbox, application=hbaseSecurityLog].</p> - <pre><code> classification.hbase.zookeeper.property.clientPort=2181 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.hbase.zookeeper.property.clientPort=2181 classification.hbase.zookeeper.quorum=sandbox.hortonworks.com app.envContextConfig.env=storm @@ -329,6 +330,7 @@ app.eagleProps.eagleService.username=admin app.eagleProps.eagleService.password=secret </code></pre> + </div> <p><img src="/images/appManager/topology-configuration-1.png" alt="topology-configuration-1" /> <img src="/images/appManager/topology-configuration-2.png" alt="topology-configuration-2" /></p> @@ -351,7 +353,7 @@ <div class="footnotes"> <ol> <li id="fn:STORM"> - <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">↩</a></p> + <p><em>All mentions of “storm” on this page represent Apache Storm.</em> <a href="#fnref:STORM" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/tutorial/userprofile.html ---------------------------------------------------------------------- diff --git a/_site/docs/tutorial/userprofile.html b/_site/docs/tutorial/userprofile.html index fbf55e6..5b442ab 100644 --- a/_site/docs/tutorial/userprofile.html +++ 
b/_site/docs/tutorial/userprofile.html @@ -173,9 +173,10 @@ is started.</p> <li> <p>Option 1: command line</p> - <pre><code>$ cd <eagle-home>/bin + <div class="highlighter-rouge"><pre class="highlight"><code>$ cd <eagle-home>/bin $ bin/eagle-userprofile-scheduler.sh --site sandbox start </code></pre> + </div> </li> <li> <p>Option 2: start via Apache Ambari @@ -203,8 +204,9 @@ $ bin/eagle-userprofile-scheduler.sh --site sandbox start <p>submit userProfiles topology if it's not on <a href="http://sandbox.hortonworks.com:8744">topology UI</a></p> - <pre><code>$ bin/eagle-topology.sh --main org.apache.eagle.security.userprofile.UserProfileDetectionMain --config conf/sandbox-userprofile-topology.conf start + <div class="highlighter-rouge"><pre class="highlight"><code>$ bin/eagle-topology.sh --main org.apache.eagle.security.userprofile.UserProfileDetectionMain --config conf/sandbox-userprofile-topology.conf start </code></pre> + </div> </li> <li> <p><strong>Option 2</strong>: Apache Ambari</p> @@ -219,23 +221,26 @@ $ bin/eagle-userprofile-scheduler.sh --site sandbox start <li>Prepare sample data for ML training and validation sample data <ul> <li>a. Download the following sample data to be used for training</li> - <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code>user1.hdfs-audit.2015-10-11-00.txt</code></a></li> - <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code>user1.hdfs-audit.2015-10-11-01.txt</code></a></li> - <li>b. Download <a href="/data/userprofile-validate.txt"><code>userprofile-validate.txt</code></a> file, which contains data points that you can try to test the models</li> + </ul> + <ul> + <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code class="highlighter-rouge">user1.hdfs-audit.2015-10-11-00.txt</code></a></li> + <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code class="highlighter-rouge">user1.hdfs-audit.2015-10-11-01.txt</code></a> + * b. Download <a href="/data/userprofile-validate.txt"><code class="highlighter-rouge">userprofile-validate.txt</code></a> file, which contains data points that you can try to test the models</li> </ul> </li> <li>Copy the files (downloaded in the previous step) into a location in sandbox -For example: <code>/usr/hdp/current/eagle/lib/userprofile/data/</code></li> - <li>Modify <code><Eagle-home>/conf/sandbox-userprofile-scheduler.conf </code> -update <code>training-audit-path</code> to the path of the training data sample (the path you used for Step 1.a) +For example: <code class="highlighter-rouge">/usr/hdp/current/eagle/lib/userprofile/data/</code></li> - <li>Modify <code class="highlighter-rouge"><Eagle-home>/conf/sandbox-userprofile-scheduler.conf </code> +update <code class="highlighter-rouge">training-audit-path</code> to the path of the training data sample (the path you used for Step 1.a) update detection-audit-path to the path for validation (the path you used for Step 1.b)</li> <li>Run the ML training program from the Eagle UI</li> <li> <p>Produce Apache Kafka data using the contents of the validate file (Step 1.b) -Run the command (assuming the eagle configuration uses Kafka topic <code>sandbox_hdfs_audit_log</code>)</p> +Run the command (assuming the eagle configuration uses Kafka topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code>)</p> - <pre><code> ./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log + <div class="highlighter-rouge"><pre class="highlight"><code> ./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log </code></pre> + </div> </li> <li>Paste a few lines of data from the validate file into kafka-console-producer. Check <a href="http://localhost:9099/eagle-service/#/dam/alertList">http://localhost:9099/eagle-service/#/dam/alertList</a> for generated alerts</li> 
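Instead of pasting lines by hand, the validate file can be piped straight into the console producer. The sketch below only assembles that shell command; the file path mirrors the example location used earlier in this tutorial (an assumption — adjust it to wherever you copied the data), and nothing is executed:

```java
import java.util.List;

// Sketch: build (but do not run) the kafka-console-producer invocation from
// the step above, redirecting the downloaded validate file into the
// sandbox_hdfs_audit_log topic.
public class ProducerCommand {
    static List<String> command(String broker, String topic, String validateFile) {
        // "sh -c" lets the shell handle the input redirection.
        return List.of("sh", "-c",
                "./kafka-console-producer.sh --broker-list " + broker
                        + " --topic " + topic + " < " + validateFile);
    }

    public static void main(String[] args) {
        List<String> cmd = command("sandbox.hortonworks.com:6667",
                "sandbox_hdfs_audit_log",
                "/usr/hdp/current/eagle/lib/userprofile/data/userprofile-validate.txt");
        System.out.println(cmd.get(2));
        // To actually produce, run: new ProcessBuilder(cmd).inheritIO().start();
    }
}
```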
http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/docs/usecases.html ---------------------------------------------------------------------- diff --git a/_site/docs/usecases.html b/_site/docs/usecases.html index 056c0ce..ebb0a23 100644 --- a/_site/docs/usecases.html +++ b/_site/docs/usecases.html @@ -218,16 +218,16 @@ <div class="footnotes"> <ol> <li id="fn:HADOOP"> - <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> + <p><em>All mentions of “hadoop” on this page represent Apache Hadoop.</em> <a href="#fnref:HADOOP" class="reversefootnote">↩</a></p> </li> <li id="fn:HIVE"> - <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> + <p><em>All mentions of “hive” on this page represent Apache Hive.</em> <a href="#fnref:HIVE" class="reversefootnote">↩</a></p> </li> <li id="fn:SPARK"> - <p><em>All mentions of “spark” on this page represent Apache Spark.</em> <a href="#fnref:SPARK" class="reversefootnote">↩</a></p> + <p><em>All mentions of “spark” on this page represent Apache Spark.</em> <a href="#fnref:SPARK" class="reversefootnote">↩</a></p> </li> <li id="fn:CASSANDRA"> - <p><em>Apache Cassandra.</em> <a href="#fnref:CASSANDRA" class="reversefootnote">↩</a></p> + <p><em>Apache Cassandra.</em> <a href="#fnref:CASSANDRA" class="reversefootnote">↩</a></p> </li> </ol> </div> http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/feed.xml ---------------------------------------------------------------------- diff --git a/_site/feed.xml b/_site/feed.xml index bd59283..a2720f7 100644 --- a/_site/feed.xml +++ b/_site/feed.xml @@ -5,9 +5,9 @@ <description>Eagle - Analyze Big Data Platforms for Security and Performance</description> <link>http://goeagle.io/</link> <atom:link href="http://goeagle.io/feed.xml" rel="self" type="application/rss+xml"/> - <pubDate>Mon, 03 Apr 2017 19:55:40 +0800</pubDate> - 
<lastBuildDate>Mon, 03 Apr 2017 19:55:40 +0800</lastBuildDate> - <generator>Jekyll v2.5.3</generator> + <pubDate>Wed, 22 Nov 2017 13:52:35 +0800</pubDate> + <lastBuildDate>Wed, 22 Nov 2017 13:52:35 +0800</lastBuildDate> + <generator>Jekyll v3.4.3</generator> <item> <title>Apache Eagle æ£å¼åå¸ï¼åå¸å¼å®æ¶Hadoopæ°æ®å®å ¨æ¹æ¡</title> @@ -17,7 +17,7 @@ <p>æ¥åï¼eBayå ¬å¸éé宣å¸æ£å¼åå¼æºä¸çæ¨åºåå¸å¼å®æ¶å®å ¨çæ§æ¹æ¡ ï¼ Apache Eagle (http://goeagle.io)ï¼è¯¥é¡¹ç®å·²äº2015å¹´10æ26æ¥æ£å¼å å ¥Apache æ为åµåå¨é¡¹ç®ãApache Eagleæä¾ä¸å¥é«æåå¸å¼çæµå¼çç¥å¼æï¼å ·æé«å®æ¶ãå¯ä¼¸ç¼©ãææ©å±ã交äºå好çç¹ç¹ï¼åæ¶éææºå¨å¦ä¹ 对ç¨æ·è¡ä¸ºå»ºç«Profile以å®ç°æºè½å®æ¶å°ä¿æ¤Hadoopçæç³»ç»ä¸å¤§æ°æ®çå®å ¨ã</p> -<h2 id="section">èæ¯</h2> +<h2 id="èæ¯">èæ¯</h2> <p>éç大æ°æ®çåå±ï¼è¶æ¥è¶å¤çæåä¼ä¸æè ç»ç»å¼å§éåæ°æ®é©±å¨åä¸çè¿ä½æ¨¡å¼ãå¨eBayï¼æ们æ¥ææ°ä¸åå·¥ç¨å¸ãåæå¸åæ°æ®ç§å¦å®¶ï¼ä»ä»¬æ¯å¤©è®¿é®åææ°PB级çæ°æ®ï¼ä»¥ä¸ºæ们çç¨æ·å¸¦æ¥æ ä¸ä¼¦æ¯çä½éªãå¨å ¨çä¸å¡ä¸ï¼æ们ä¹å¹¿æ³å°å©ç¨æµ·é大æ°æ®æ¥è¿æ¥æ们æ°ä»¥äº¿è®¡çç¨æ·ã</p> <p>è¿å¹´æ¥ï¼Hadoopå·²ç»éæ¸æ为大æ°æ®åæé¢åæå欢è¿ç解å³æ¹æ¡ï¼eBayä¹ä¸ç´å¨ä½¿ç¨Hadoopææ¯ä»æ°æ®ä¸ææä»·å¼ï¼ä¾å¦ï¼æ们éè¿å¤§æ°æ®æé«ç¨æ·çæç´¢ä½éªï¼è¯å«åä¼åç²¾å广åææ¾ï¼å å®æ们ç产åç®å½ï¼ä»¥åéè¿ç¹å»æµåæ以ç解ç¨æ·å¦ä½ä½¿ç¨æ们çå¨çº¿å¸åºå¹³å°çã</p> @@ -54,20 +54,20 @@ <li><strong>å¼æº</strong>ï¼Eagleä¸ç´æ ¹æ®å¼æºçæ åå¼åï¼å¹¶æ建äºè¯¸å¤å¤§æ°æ®é¢åçå¼æºäº§åä¹ä¸ï¼å æ¤æ们å³å®ä»¥Apache许å¯è¯å¼æºEagleï¼ä»¥åé¦ç¤¾åºï¼åæ¶ä¹æå¾ è·å¾ç¤¾åºçåé¦ãåä½ä¸æ¯æã</li> </ul> -<h2 id="eagle">Eagleæ¦è§</h2> +<h2 id="eagleæ¦è§">Eagleæ¦è§</h2> <p><img src="/images/posts/eagle-group.png" alt="" /></p> -<h4 id="data-collection-and-storage">æ°æ®æµæ¥å ¥ååå¨ï¼Data Collection and Storageï¼</h4> +<h4 id="æ°æ®æµæ¥å ¥ååå¨data-collection-and-storage">æ°æ®æµæ¥å ¥ååå¨ï¼Data Collection and Storageï¼</h4> <p>Eagleæä¾é«åº¦å¯æ©å±çç¼ç¨APIï¼å¯ä»¥æ¯æå°ä»»ä½ç±»åçæ°æ®æºéæå°Eagleççç¥æ§è¡å¼æä¸ãä¾å¦ï¼å¨Eagle HDFS 审计äºä»¶ï¼Auditï¼çæ§æ¨¡åä¸ï¼éè¿Kafkaæ¥å®æ¶æ¥æ¶æ¥èªNamenode Log4j Appender æè Logstash Agent æ¶éçæ°æ®ï¼å¨Eagle Hive çæ§æ¨¡åä¸ï¼éè¿YARN API æ¶éæ£å¨è¿è¡JobçHive 
æ¥è¯¢æ¥å¿ï¼å¹¶ä¿è¯æ¯è¾é«çå¯ä¼¸ç¼©æ§å容éæ§ã</p> -<h4 id="data-processing">æ°æ®å®æ¶å¤çï¼Data Processingï¼</h4> +<h4 id="æ°æ®å®æ¶å¤çdata-processing">æ°æ®å®æ¶å¤çï¼Data Processingï¼</h4> <p><strong>æµå¤çAPIï¼Stream Processing APIï¼Eagle</strong> æä¾ç¬ç«äºç©çå¹³å°èé«åº¦æ½è±¡çæµå¤çAPIï¼ç®åé»è®¤æ¯æApache Stormï¼ä½æ¯ä¹å 许æ©å±å°å ¶ä»ä»»ææµå¤çå¼æï¼æ¯å¦Flink æè Samzaçã该å±æ½è±¡å 许å¼åè å¨å®ä¹çæ§æ°æ®å¤çé»è¾æ¶ï¼æ éå¨ç©çæ§è¡å±ç»å®ä»»ä½ç¹å®æµå¤çå¹³å°ï¼èåªééè¿å¤ç¨ãæ¼æ¥åç»è£ ä¾å¦æ°æ®è½¬æ¢ãè¿æ»¤ãå¤é¨æ°æ®Joinçç»ä»¶ï¼ä»¥å®ç°æ»¡è¶³éæ±çDAGï¼æåæ ç¯å¾ï¼ï¼åæ¶ï¼å¼åè ä¹å¯ä»¥å¾å®¹æå°ä»¥ç¼ç¨å°æ¹å¼å°ä¸å¡é»è¾æµç¨åEagle çç¥å¼ææ¡æ¶éæèµ·æ¥ãEagleæ¡æ¶å é¨ä¼å°æè¿°ä¸å¡é»è¾çDAGç¼è¯æåºå±æµå¤çæ¶æçåçåºç¨ï¼ä¾å¦Apache Storm Topology çï¼ä»äºå®ç°å¹³å°çç¬ç«ã</p> <p><strong>以ä¸æ¯ä¸ä¸ªEagleå¦ä½å¤çäºä»¶ååè¦ç示ä¾ï¼</strong></p> -<pre><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env +<div class="highlighter-rouge"><pre class="highlight"><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare kafka source .flatMap(new AuditLogTransformer()) // transform event .groupBy(Arrays.asList(0)) // group by 1st field @@ -75,6 +75,7 @@ StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout .alertWithConsumer(âuserActivityâ,âuserProfileExecutorâ) // ML policy evaluation env.execute(); // execute stream processing and alert </code></pre> +</div> <p><strong>åè¦æ¡æ¶ï¼Alerting Frameworkï¼Eagle</strong>åè¦æ¡æ¶ç±æµå æ°æ®APIãçç¥å¼ææå¡æä¾APIãçç¥Partitioner API 以åé¢è¦å»éæ¡æ¶çç»æ:</p> @@ -84,7 +85,7 @@ env.execute(); // execute stream processing and alert <li> <p><strong>æ©å±æ§</strong> Eagleççç¥å¼ææå¡æä¾APIå è®¸ä½ æå ¥æ°ççç¥å¼æ</p> - <pre><code> public interface PolicyEvaluatorServiceProvider { + <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyEvaluatorServiceProvider { 
public String getPolicyType(); // literal string to identify one type of policy public Class&lt;? extends PolicyEvaluator&gt; getPolicyEvaluator(); // get policy evaluator implementation public List&lt;Module&gt; getBindingModules(); // policy text with json format to object mapping @@ -95,15 +96,17 @@ env.execute(); // execute stream processing and alert public void onPolicyDelete(); // invoked when policy is deleted } </code></pre> + </div> </li> <li><strong>çç¥Partitioner API</strong> å 许çç¥å¨ä¸åçç©çèç¹ä¸å¹¶è¡æ§è¡ãä¹å è®¸ä½ èªå®ä¹çç¥Partitionerç±»ãè¿äºåè½ä½¿å¾çç¥åäºä»¶å®å ¨ä»¥åå¸å¼çæ¹å¼æ§è¡ã</li> <li> <p><strong>å¯ä¼¸ç¼©æ§</strong> Eagle éè¿æ¯æçç¥çååºæ¥å£æ¥å®ç°å¤§éççç¥å¯ä¼¸ç¼©å¹¶åå°è¿è¡</p> - <pre><code> public interface PolicyPartitioner extends Serializable { + <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyPartitioner extends Serializable { int partition(int numTotalPartitions, String policyType, String policyId); // method to distribute policies } </code></pre> + </div> <p><img src="/images/posts/policy-partition.png" alt="" /></p> @@ -160,26 +163,29 @@ Eagle æ¯ææ ¹æ®ç¨æ·å¨Hadoopå¹³å°ä¸åå²ä½¿ç¨è¡ä¸ºä¹ æ¯æ¥å®ä¹è¡ <li> <p>åä¸äºä»¶æ§è¡çç¥ï¼ç¨æ·è®¿é®Hiveä¸çæææ°æ®åï¼</p> - <pre><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream; + <div class="highlighter-rouge"><pre class="highlight"><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream; </code></pre> + </div> </li> <li> <p>åºäºçªå£ççç¥ï¼ç¨æ·å¨10åéå 访é®ç®å½ /tmp/private å¤ä½ 5次ï¼</p> - <pre><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into outputStream; + <div class="highlighter-rouge"><pre class="highlight"><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by 
user having aggValue &gt;= 5 insert into outputStream; </code></pre> + </div> </li> </ul> <p><strong>æ¥è¯¢æå¡ï¼Query Serviceï¼</strong> Eagle æä¾ç±»SQLçREST APIç¨æ¥å®ç°é对海éæ°æ®éç综å计ç®ãæ¥è¯¢ååæçè½åï¼æ¯æä¾å¦è¿æ»¤ãèåãç´æ¹è¿ç®ãæåºãtopãç®æ¯è¡¨è¾¾å¼ä»¥åå页çãEagleä¼å æ¯æHBase ä½ä¸ºå ¶é»è®¤æ°æ®åå¨ï¼ä½æ¯åæ¶ä¹æ¯æåºJDBCçå ³ç³»åæ°æ®åºãç¹å«æ¯å½éæ©ä»¥HBaseä½ä¸ºåå¨æ¶ï¼Eagle便åçæ¥æäºHBaseåå¨åæ¥è¯¢æµ·éçæ§æ°æ®çè½åï¼Eagle æ¥è¯¢æ¡æ¶ä¼å°ç¨æ·æä¾çç±»SQLæ¥è¯¢è¯æ³æç»ç¼è¯æ为HBase åççFilter 对象ï¼å¹¶æ¯æéè¿HBase Coprocessorè¿ä¸æ¥æåååºé度ã</p> -<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000 +<div class="highlighter-rouge"><pre class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000 </code></pre> +</div> -<h2 id="eagleebay">Eagleå¨eBayç使ç¨åºæ¯</h2> +<h2 id="eagleå¨ebayç使ç¨åºæ¯">Eagleå¨eBayç使ç¨åºæ¯</h2> <p>ç®åï¼Eagleçæ°æ®è¡ä¸ºçæ§ç³»ç»å·²ç»é¨ç½²å°ä¸ä¸ªæ¥æ2500å¤ä¸ªèç¹çHadoopé群ä¹ä¸ï¼ç¨ä»¥ä¿æ¤æ°ç¾PBæ°æ®çå®å ¨ï¼å¹¶æ£è®¡åäºä»å¹´å¹´åºä¹åæ©å±å°å ¶ä»ä¸å个Hadoopé群ä¸ï¼ä»èè¦çeBay ææ主è¦Hadoopç10000å¤å°èç¹ãå¨æ们çç产ç¯å¢ä¸ï¼æ们已é对HDFSãHive çé群ä¸çæ°æ®é ç½®äºä¸äºåºç¡çå®å ¨çç¥ï¼å¹¶å°äºå¹´åºä¹åä¸æå¼å ¥æ´å¤ççç¥ï¼ä»¥ç¡®ä¿éè¦æ°æ®çç»å¯¹å®å ¨ãç®åï¼Eagleççç¥æ¶µçå¤ç§æ¨¡å¼ï¼å æ¬ä»è®¿é®æ¨¡å¼ãé¢ç¹è®¿é®æ°æ®éï¼é¢å®ä¹æ¥è¯¢ç±»åãHive 表ååãHBase 表以ååºäºæºå¨å¦ä¹ 模åçæçç¨æ·Profileç¸å ³çææçç¥çãåæ¶ï¼æ们ä¹æ广æ³ççç¥æ¥é²æ¢æ°æ®ç丢失ãæ°æ®è¢«æ·è´å°ä¸å®å ¨å°ç¹ãæææ°æ®è¢«æªææåºå访é®çãEagleçç¥å®ä¹ä¸æ大ççµæ´»æ§åæ©å±æ§ä½¿å¾æ们æªæ¥å¯ä»¥è½»æå°ç»§ç»æ©å±æ´å¤æ´å¤æççç¥ä»¥æ¯ææ´å¤å¤å åçç¨ä¾åºæ¯ã</p> -<h2 id="section-1">åç»è®¡å</h2> +<h2 id="åç»è®¡å">åç»è®¡å</h2> <p>è¿å»ä¸¤å¹´ä¸ï¼å¨eBay é¤äºè¢«ç¨äºæ°æ®è¡ä¸ºçæ§ä»¥å¤ï¼Eagle æ ¸å¿æ¡æ¶è¿è¢«å¹¿æ³ç¨äºçæ§èç¹å¥åº·ç¶åµãHadoopåºç¨æ§è½ææ ãHadoop æ ¸å¿æå¡ä»¥åæ´ä¸ªHadoopé群çå¥åº·ç¶åµç诸å¤é¢åãæ们è¿å»ºç«ä¸ç³»åçèªå¨åæºå¶ï¼ä¾å¦èç¹ä¿®å¤çï¼å¸®å©æ们平å°é¨é¨æ大å¾èçäºæ们人工å³åï¼å¹¶ææå°æåäºæ´ä¸ªé群èµæºå°å©ç¨çã</p> <p>以ä¸æ¯æ们ç®åæ£å¨å¼åä¸å°ä¸äºç¹æ§ï¼</p> @@ -196,7 +202,7 @@ Eagle æ¯ææ ¹æ®ç¨æ·å¨Hadoopå¹³å°ä¸åå²ä½¿ç¨è¡ä¸ºä¹ æ¯æ¥å®ä¹è¡ </li> </ul> 
-<h2 id="section-2">About the Author</h2>
+<h2 id="关于作者">About the Author</h2>

 <p><a href="https://github.com/haoch">Hao Chen</a> is an Apache Eagle Committer and PMC member, and a senior software engineer in eBay's Analytics Data Infrastructure department, responsible for Eagle's product design, technical architecture, core implementation, and open-source community advocacy.</p>

 <p>Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:</p>
@@ -210,7 +216,7 @@ Eagle supports defining behavior patterns based on users' historical usage habits on the Hadoop platform

 <p>eBay's Analytics Data Infrastructure (ADI) department is eBay's global data and analytics infrastructure organization. It develops and manages eBay's data platforms across databases, data warehousing, Hadoop, business intelligence, and machine learning, supports departments throughout eBay in making timely and effective business decisions with advanced data-analytics solutions, and provides data-analytics solutions to business users around the globe.</p>

-<h2 id="section-3">References</h2>
+<h2 id="参考资料">References</h2>

 <ul>
   <li>Apache Eagle documentation: <a href="http://goeagle.io">http://goeagle.io</a></li>
@@ -218,7 +224,7 @@ Eagle supports defining behavior patterns based on users' historical usage habits on the Hadoop platform
   <li>Apache Eagle project: <a href="http://incubator.apache.org/projects/eagle.html">http://incubator.apache.org/projects/eagle.html</a></li>
 </ul>

-<h2 id="section-4">Citation Links</h2>
+<h2 id="引用链接">Citation Links</h2>
 <ul>
   <li><strong>CSDN</strong>: <a href="http://www.csdn.net/article/2015-10-29/2826076">http://www.csdn.net/article/2015-10-29/2826076</a></li>
   <li><strong>OSCHINA</strong>: <a href="http://www.oschina.net/news/67515/apache-eagle">http://www.oschina.net/news/67515/apache-eagle</a></li>

http://git-wip-us.apache.org/repos/asf/eagle/blob/0094010b/_site/post/2015/10/27/apache-eagle-announce-cn.html
----------------------------------------------------------------------
diff --git a/_site/post/2015/10/27/apache-eagle-announce-cn.html b/_site/post/2015/10/27/apache-eagle-announce-cn.html
index 5d87315..088e22d 100644
--- a/_site/post/2015/10/27/apache-eagle-announce-cn.html
+++ b/_site/post/2015/10/27/apache-eagle-announce-cn.html
@@ -93,7 +93,7 @@

 <p>Recently, eBay announced the official open-source release of its distributed real-time security monitoring solution, Apache Eagle (http://goeagle.io), which formally joined the Apache Incubator on October 26, 2015. Apache Eagle provides an efficient, distributed streaming policy engine that is highly real-time, scalable, extensible, and interaction-friendly, and it integrates machine learning to build profiles of user behavior so that big data across the Hadoop ecosystem can be protected intelligently and in real time.</p>

-<h2 id="section">Background</h2>
+<h2 id="背景">Background</h2>

 <p>With the growth of big data, more and more successful enterprises and organizations are adopting data-driven business models. At eBay we have tens of thousands of engineers, analysts, and data scientists who access and analyze petabytes of data every day to bring an unparalleled experience to our users. Across our global business, we also make extensive use of massive data sets to connect our hundreds of millions of users.</p>

 <p>In recent years Hadoop has gradually become the most popular solution in big-data analytics, and eBay has long used Hadoop technology to mine value from data. For example, we use big data to improve the user search experience, identify and optimize precise ad targeting, enrich our product catalog, and understand how users use our online marketplace through clickstream analysis.</p>
@@ -130,20 +130,20 @@
   <li><strong>Open source</strong>: Eagle has always been developed to open-source standards and is built on top of many open-source big-data products, so we decided to open-source Eagle under the Apache license to give back to the community, and we look forward to the community's feedback, collaboration, and support.</li>
 </ul>

-<h2 id="eagle">Eagle Overview</h2>
+<h2 id="eagle概览">Eagle Overview</h2>

 <p><img src="/images/posts/eagle-group.png" alt="" /></p>

-<h4 id="data-collection-and-storage">Data Collection and Storage</h4>
+<h4 id="数据流接入和存储data-collection-and-storage">Data Collection and Storage</h4>

 <p>Eagle provides a highly extensible programming API that can integrate any type of data source into Eagle's policy-execution engine. For example, the Eagle HDFS audit-event monitoring module receives, via Kafka and in real time, data collected by the Namenode Log4j appender or a Logstash agent; the Eagle Hive monitoring module collects the Hive query logs of running jobs through the YARN API, with a high degree of scalability and fault tolerance.</p>

-<h4 id="data-processing">Data Processing</h4>
+<h4 id="数据实时处理data-processing">Data Processing</h4>

 <p><strong>Stream Processing API</strong>: Eagle provides a highly abstract stream-processing API that is independent of the physical platform. Apache Storm is supported by default, but the API can be extended to any other stream-processing engine, such as Flink or Samza. This abstraction layer lets developers define monitoring data-processing logic without binding to any particular stream-processing platform at the physical execution layer; instead they reuse, chain, and assemble components such as data transformation, filtering, and external data joins to build a DAG (directed acyclic graph) that meets their needs, and they can also easily integrate their business-logic flows with the Eagle policy-engine framework programmatically. Internally, the Eagle framework compiles the DAG describing the business logic into a native application of the underlying stream-processing architecture, for example an Apache Storm topology, thus achieving platform independence.</p>

 <p><strong>Below is an example of how Eagle processes events and raises alerts:</strong></p>

-<pre><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env
+<div class="highlighter-rouge"><pre class="highlight"><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env
StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare kafka source
      .flatMap(new AuditLogTransformer()) // transform event
      .groupBy(Arrays.asList(0)) // group by 1st field
@@ -151,6 +151,7 @@ StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout
      .alertWithConsumer("userActivity","userProfileExecutor") // ML policy evaluation
env.execute(); // execute stream processing and alert
</code></pre>
+</div>

 <p><strong>Alerting Framework</strong>: the Eagle alerting framework is composed of the stream metadata API, the policy-engine service provider API, the policy partitioner API, and the alert deduplication framework:</p>
@@ -160,7 +161,7 @@ env.execute(); // execute stream processing and alert
   <li>
     <p><strong>Extensibility</strong>: Eagle's policy-engine service provider API allows you to plug in new policy engines</p>

-    <pre><code> public interface PolicyEvaluatorServiceProvider {
+    <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyEvaluatorServiceProvider {
 public String getPolicyType(); // literal string to identify one type of policy
 public Class<? extends PolicyEvaluator> getPolicyEvaluator(); // get policy evaluator implementation
 public List<Module> getBindingModules(); // map policy text in JSON format to objects
@@ -171,15 +172,17 @@ env.execute(); // execute stream processing and alert
 public void onPolicyDelete(); // invoked when policy is deleted
 }
 </code></pre>
+    </div>
   </li>
   <li><strong>Policy Partitioner API</strong> allows policies to execute in parallel on different physical nodes, and also lets you define a custom policy partitioner class. These capabilities enable policies and events to run in a fully distributed manner.</li>
   <li>
     <p><strong>Scalability</strong>: Eagle runs large numbers of policies scalably and concurrently by supporting a policy-partitioning interface</p>

-    <pre><code> public interface PolicyPartitioner extends Serializable {
+    <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyPartitioner extends Serializable {
    int partition(int numTotalPartitions, String policyType, String policyId); // method to distribute policies
 }
 </code></pre>
+    </div>

     <p><img src="/images/posts/policy-partition.png" alt="" /></p>
@@ -236,26 +239,29 @@ Eagle supports defining behavior patterns based on users' historical usage habits on the Hadoop platform
   <li>
     <p>Policy on a single event (a user accesses a sensitive data column in Hive):</p>

-    <pre><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream;
+    <div class="highlighter-rouge"><pre class="highlight"><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream;
 </code></pre>
+    </div>
   </li>
   <li>
     <p>Window-based policy (a user accesses the directory /tmp/private more than 5 times within 10 minutes):</p>

-    <pre><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue >= 5 insert into outputStream;
+    <div class="highlighter-rouge"><pre class="highlight"><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue >= 5 insert into outputStream;
 </code></pre>
+    </div>
   </li>
 </ul>
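To make the partitioning contract above concrete, here is a minimal sketch of a custom implementation of the `PolicyPartitioner` interface reproduced from the post. The hash-modulo strategy and the sample policy type/id strings are illustrative assumptions, not Eagle's built-in behavior.

```java
import java.io.Serializable;

public class PartitionerExample {

    // Interface as described in the post: decides which partition a policy runs on.
    interface PolicyPartitioner extends Serializable {
        int partition(int numTotalPartitions, String policyType, String policyId);
    }

    // Illustrative strategy (an assumption, not Eagle's default): hash the
    // policy id so each policy is pinned to a stable partition, spreading
    // policies roughly evenly across physical nodes.
    static class HashPolicyPartitioner implements PolicyPartitioner {
        @Override
        public int partition(int numTotalPartitions, String policyType, String policyId) {
            // Math.floorMod keeps the result non-negative even when hashCode() is negative.
            return Math.floorMod(policyId.hashCode(), numTotalPartitions);
        }
    }

    public static void main(String[] args) {
        PolicyPartitioner p = new HashPolicyPartitioner();
        // Hypothetical policy type and id, for illustration only.
        int part = p.partition(4, "siddhiCEPEngine", "accessPrivateDir");
        System.out.println("policy assigned to partition " + part);
    }
}
```

Because the assignment depends only on the policy id, repeated calls route the same policy to the same partition, which is what lets events for that policy be evaluated on one node.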
 <p><strong>Query Service</strong>: Eagle provides SQL-like REST APIs for comprehensive computation, query, and analysis over massive data sets, supporting operations such as filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, and pagination. Eagle supports HBase as its default data store and also works with JDBC-based relational databases. When HBase is chosen as the store, Eagle natively gains HBase's capacity to store and query massive volumes of monitoring data: Eagle's query framework compiles the user's SQL-like query into native HBase Filter objects, and can use HBase Coprocessors to further improve response time.</p>

-<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&pageSize=100000
+<div class="highlighter-rouge"><pre class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&pageSize=100000
</code></pre>
+</div>

-<h2 id="eagleebay">Eagle Use Cases at eBay</h2>
+<h2 id="eagle在ebay的使用场景">Eagle Use Cases at eBay</h2>

 <p>Eagle's data-activity monitoring is currently deployed on a Hadoop cluster of more than 2,500 nodes, protecting the security of several hundred petabytes of data, and is planned to extend to more than ten other Hadoop clusters by the end of this year, covering the 10,000-plus nodes of all major Hadoop clusters at eBay. In our production environment we have configured baseline security policies for data on the HDFS and Hive clusters, and will keep introducing more policies before year end to guarantee the safety of critical data. Eagle's policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and policies based on user profiles generated by machine-learning models. We also maintain a broad set of policies to prevent data loss, data being copied to insecure locations, and sensitive data being accessed from unauthorized zones. The great flexibility and extensibility of Eagle's policy definitions will let us easily extend to more, and more complex, policies in support of many more diversified use cases.</p>

-<h2 id="section-1">Future Plans</h2>
+<h2 id="后续计划">Future Plans</h2>

 <p>Over the past two years at eBay, beyond data-activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the overall health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, that save our platform teams a great deal of manual effort and effectively improve overall cluster resource utilization.</p>

 <p>Below are some features currently under development:</p>
@@ -272,7 +278,7 @@ Eagle supports defining behavior patterns based on users' historical usage habits on the Hadoop platform
   </li>
 </ul>

-<h2 id="section-2">About the Author</h2>
+<h2 id="关于作者">About the Author</h2>

 <p><a href="https://github.com/haoch">Hao Chen</a> is an Apache Eagle Committer and PMC member, and a senior software engineer in eBay's Analytics Data Infrastructure department, responsible for Eagle's product design, technical architecture, core implementation, and open-source community advocacy.</p>

 <p>Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:</p>
@@ -286,7 +292,7 @@ Eagle supports defining behavior patterns based on users' historical usage habits on the Hadoop platform

 <p>eBay's Analytics Data Infrastructure (ADI) department is eBay's global data and analytics infrastructure organization. It develops and manages eBay's data platforms across databases, data warehousing, Hadoop, business intelligence, and machine learning, supports departments throughout eBay in making timely and effective business decisions with advanced data-analytics solutions, and provides data-analytics solutions to business users around the globe.</p>

-<h2 id="section-3">References</h2>
+<h2 id="参考资料">References</h2>

 <ul>
   <li>Apache Eagle documentation: <a href="http://goeagle.io">http://goeagle.io</a></li>
@@ -294,7 +300,7 @@ Eagle supports defining behavior patterns based on users' historical usage habits on the Hadoop platform
   <li>Apache Eagle project: <a href="http://incubator.apache.org/projects/eagle.html">http://incubator.apache.org/projects/eagle.html</a></li>
 </ul>

-<h2 id="section-4">Citation Links</h2>
+<h2 id="引用链接">Citation Links</h2>
 <ul>
   <li><strong>CSDN</strong>: <a href="http://www.csdn.net/article/2015-10-29/2826076">http://www.csdn.net/article/2015-10-29/2826076</a></li>
   <li><strong>OSCHINA</strong>: <a href="http://www.oschina.net/news/67515/apache-eagle">http://www.oschina.net/news/67515/apache-eagle</a></li>