Modified: eagle/site/docs/tutorial/classification.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/classification.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/docs/tutorial/classification.html (original) +++ eagle/site/docs/tutorial/classification.html Thu Jan 12 07:44:47 2017 @@ -241,30 +241,33 @@ Currently this feature is available ONLY <p>You may configure the default path for Apache Hadoop clients to connect to the remote HDFS namenode.</p> - <pre><code> classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.fs.defaultFS=hdfs://sandbox.hortonworks.com:8020 </code></pre> + </div> </li> <li> <p>HA case</p> <p>Basically, you point your fs.defaultFS at your nameservice and let the client know how it's configured (the backing namenodes) and how to fail over between them in HA mode.</p> - <pre><code> classification.fs.defaultFS=hdfs://nameservice1 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.fs.defaultFS=hdfs://nameservice1 classification.dfs.nameservices=nameservice1 classification.dfs.ha.namenodes.nameservice1=namenode1,namenode2 classification.dfs.namenode.rpc-address.nameservice1.namenode1=hadoopnamenode01:8020 classification.dfs.namenode.rpc-address.nameservice1.namenode2=hadoopnamenode02:8020 classification.dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider </code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>For a Kerberos-secured cluster, you need to get a keytab file and the principal from your admin, and configure "eagle.keytab.file" and "eagle.kerberos.principal" to authenticate its access.</p> - <pre><code> classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab + <div class="highlighter-rouge"><pre class="highlight"><code> 
classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab [email protected] </code></pre> + </div> <p>If there is an exception about "invalid server principal name", you may need to check the DNS resolver, or the data transfer settings, such as "dfs.encrypt.data.transfer", "dfs.encrypt.data.transfer.algorithm", "dfs.trustedchannel.resolver.class", "dfs.datatransfer.client.encrypt".</p> </li> @@ -275,12 +278,13 @@ Currently this feature is available ONLY <li> <p>Basic</p> - <pre><code> classification.accessType=metastoredb_jdbc + <div class="highlighter-rouge"><pre class="highlight"><code> classification.accessType=metastoredb_jdbc classification.password=hive classification.user=hive classification.jdbcDriverClassName=com.mysql.jdbc.Driver classification.jdbcUrl=jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true </code></pre> + </div> </li> </ul> </li> @@ -293,16 +297,17 @@ Currently this feature is available ONLY <p>You need to set the "hbase.zookeeper.quorum":"localhost" property and the "hbase.zookeeper.property.clientPort" property.</p> - <pre><code> classification.hbase.zookeeper.property.clientPort=2181 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.hbase.zookeeper.property.clientPort=2181 classification.hbase.zookeeper.quorum=localhost </code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>According to your environment, you can add or remove some of the following properties. 
Here is the reference.</p> - <pre><code> classification.hbase.zookeeper.property.clientPort=2181 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.hbase.zookeeper.property.clientPort=2181 classification.hbase.zookeeper.quorum=localhost classification.hbase.security.authentication=kerberos classification.hbase.master.kerberos.principal=hadoop/[email protected] @@ -310,6 +315,7 @@ Currently this feature is available ONLY classification.eagle.keytab.file=/EAGLE-HOME/.keytab/eagle.keytab [email protected] </code></pre> + </div> </li> </ul> </li>
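The classification.* keys shown in the blocks above (both the basic and Kerberos-secured variants) follow one convention: each property mirrors a standard Hadoop/HBase client key with a `classification.` prefix. As a rough, hypothetical sketch — not Eagle's actual loader code, and the function name here is invented for illustration — stripping that prefix yields the map that would be handed to the underlying client:

```python
# Hypothetical sketch (NOT Eagle's implementation): illustrates the naming
# convention above, where each "classification."-prefixed property mirrors a
# standard Hadoop/HBase client configuration key.

def to_client_conf(props, prefix="classification."):
    """Strip the prefix and keep only the keys destined for the client."""
    return {k[len(prefix):]: v for k, v in props.items() if k.startswith(prefix)}

props = {
    "classification.hbase.zookeeper.property.clientPort": "2181",
    "classification.hbase.zookeeper.quorum": "localhost",
    "classification.hbase.security.authentication": "kerberos",
    "web.port": "9099",  # unrelated setting, intentionally left out of the result
}

client_conf = to_client_conf(props)
print(client_conf["hbase.zookeeper.quorum"])  # -> localhost
```

Under this reading, adding a new client property to the site configuration is just a matter of prefixing the standard key with `classification.`.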
Modified: eagle/site/docs/tutorial/ldap.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/ldap.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/docs/tutorial/ldap.html (original) +++ eagle/site/docs/tutorial/ldap.html Thu Jan 12 07:44:47 2017 @@ -219,7 +219,7 @@ <p>Step 1: edit configuration under conf/ldap.properties.</p> -<pre><code>ldap.server=ldap://localhost:10389 +<div class="highlighter-rouge"><pre class="highlight"><code>ldap.server=ldap://localhost:10389 ldap.username=uid=admin,ou=system ldap.password=secret ldap.user.searchBase=ou=Users,o=mojo @@ -228,12 +228,13 @@ ldap.user.groupSearchBase=ou=groups,o=mo acl.adminRole= acl.defaultRole=ROLE_USER </code></pre> +</div> <p>acl.adminRole and acl.defaultRole are two customized properties for Eagle. Eagle manages admin users with groups. If you set acl.adminRole as ROLE_{EAGLE-ADMIN-GROUP-NAME}, members in this group have the admin privilege. 
acl.defaultRole is ROLE_USER.</p> <p>Step 2: edit conf/eagle-service.conf, and add springActiveProfile="default"</p> -<pre><code>eagle{ +<div class="highlighter-rouge"><pre class="highlight"><code>eagle{ service{ storage-type="hbase" hbase-zookeeper-quorum="localhost" @@ -243,6 +244,7 @@ acl.defaultRole=ROLE_USER } } </code></pre> +</div> </div><!--end of loadcontent--> Modified: eagle/site/docs/tutorial/notificationplugin.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/notificationplugin.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/docs/tutorial/notificationplugin.html (original) +++ eagle/site/docs/tutorial/notificationplugin.html Thu Jan 12 07:44:47 2017 @@ -242,12 +242,12 @@ </li> </ul> -<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" /> -### Customized Notification Plugin</p> +<p><img src="/images/notificationPlugin.png" alt="notificationPlugin" /></p> +<h3 id="customized-notification-plugin">Customized Notification Plugin</h3> <p>To integrate a customized notification plugin, we must implement an interface</p> -<pre><code>public interface NotificationPlugin { +<div class="highlighter-rouge"><pre class="highlight"><code>public interface NotificationPlugin { /** * for initialization * @throws Exception @@ -277,24 +277,26 @@ void onAlert(AlertAPIEntity alertEntity) List<NotificationStatus> getStatusList(); } Examples: AlertKafkaPlugin, AlertEmailPlugin, and AlertEagleStorePlugin. </code></pre> +</div> <p>The second and crucial step is to register the configurations of the customized plugin. 
In other words, we need to persist the configuration template into the database in order to expose the configurations to users in the front end.</p> <p>Examples:</p> -<pre><code>{ - "prefix": "alertNotifications", - "tags": { - "notificationType": "kafka" - }, - "className": "org.apache.eagle.notification.plugin.AlertKafkaPlugin", - "description": "send alert to kafka bus", - "enabled":true, - "fields": "[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]" -} -</code></pre> +<div class="highlighter-rouge"><pre class="highlight"><code><span class="p">{</span><span class="w"> + </span><span class="nt">"prefix"</span><span class="p">:</span><span class="w"> </span><span class="s2">"alertNotifications"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"tags"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"notificationType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"kafka"</span><span class="w"> + </span><span class="p">},</span><span class="w"> + </span><span class="nt">"className"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org.apache.eagle.notification.plugin.AlertKafkaPlugin"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"send alert to kafka bus"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"enabled"</span><span class="p">:</span><span class="kc">true</span><span class="p">,</span><span class="w"> + </span><span class="nt">"fields"</span><span class="p">:</span><span class="w"> </span><span class="s2">"[{\"name\":\"kafka_broker\",\"value\":\"sandbox.hortonworks.com:6667\"},{\"name\":\"topic\"}]"</span><span class="w"> +</span><span class="p">}</span><span class="w"> +</span></code></pre> +</div> -<p><strong>Note</strong>: <code>fields</code> 
is the configuration for notification type <code>kafka</code></p> +<p><strong>Note</strong>: <code class="highlighter-rouge">fields</code> is the configuration for notification type <code class="highlighter-rouge">kafka</code></p> <p>How can we do that? <a href="https://github.com/apache/eagle/blob/master/eagle-assembly/src/main/bin/eagle-topology-init.sh">Here</a> are Eagle's other notification plugin configurations. Just append yours to it, and run this script when the Eagle service is up.</p> Modified: eagle/site/docs/tutorial/policy.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/policy.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/docs/tutorial/policy.html (original) +++ eagle/site/docs/tutorial/policy.html Thu Jan 12 07:44:47 2017 @@ -242,12 +242,13 @@ <li> <p><strong>Step 2</strong>: Eagle supports a variety of properties for match criteria where users can set different values. Eagle also supports window functions to extend policies with time functions.</p> - <pre><code>command = delete + <div class="highlighter-rouge"><pre class="highlight"><code>command = delete (Eagle currently supports the following commands: open, delete, copy, append, copy from local, get, move, mkdir, create, list, change permissions) source = /tmp/private (Eagle supports wildcarding for property values, for example /tmp/*) </code></pre> + </div> <p><img src="/images/docs/hdfs-policy2.png" alt="HDFS Policies" /></p> </li> @@ -274,12 +275,13 @@ source = /tmp/private <li> <p><strong>Step 2</strong>: Eagle supports a variety of properties for match criteria where users can set different values. 
Eagle also supports window functions to extend policies with time functions.</p> - <pre><code>command = Select + <div class="highlighter-rouge"><pre class="highlight"><code>command = Select (Eagle currently supports the following commands DDL statements Create, Drop, Alter, Truncate, Show) sensitivity type = PHONE_NUMBER (Eagle supports classifying data in Hive with different sensitivity types. Users can use these sensitivity types to create policies) </code></pre> + </div> <p><img src="/images/docs/hive-policy2.png" alt="Hive Policies" /></p> </li> Modified: eagle/site/docs/tutorial/site-0.3.0.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/site-0.3.0.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/docs/tutorial/site-0.3.0.html (original) +++ eagle/site/docs/tutorial/site-0.3.0.html Thu Jan 12 07:44:47 2017 @@ -239,32 +239,35 @@ Here we give configuration examples for <p>You may configure the default path for Hadoop clients to connect remote hdfs namenode.</p> - <pre><code> {"fs.defaultFS":"hdfs://sandbox.hortonworks.com:8020"} -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="nt">"fs.defaultFS"</span><span class="p">:</span><span class="s2">"hdfs://sandbox.hortonworks.com:8020"</span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> <li> <p>HA case</p> <p>Basically, you point your fs.defaultFS at your nameservice and let the client know how its configured (the backing namenodes) and how to fail over between them under the HA mode</p> - <pre><code> {"fs.defaultFS":"hdfs://nameservice1", - "dfs.nameservices": "nameservice1", - "dfs.ha.namenodes.nameservice1":"namenode1,namenode2", - "dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020", - "dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020", - 
"dfs.client.failover.proxy.provider.nameservice1": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="nt">"fs.defaultFS"</span><span class="p">:</span><span class="s2">"hdfs://nameservice1"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.nameservices"</span><span class="p">:</span><span class="w"> </span><span class="s2">"nameservice1"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.ha.namenodes.nameservice1"</span><span class="p">:</span><span class="s2">"namenode1,namenode2"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.namenode.rpc-address.nameservice1.namenode1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hadoopnamenode01:8020"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.namenode.rpc-address.nameservice1.namenode2"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hadoopnamenode02:8020"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"dfs.client.failover.proxy.provider.nameservice1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>For Kerberos-secured cluster, you need to get a keytab file and the principal from your admin, and configure âeagle.keytab.fileâ and âeagle.kerberos.principalâ to authenticate its access.</p> - <pre><code> { "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab", - "eagle.kerberos.principal":"[email protected]" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span 
class="p">{</span><span class="w"> </span><span class="nt">"eagle.keytab.file"</span><span class="p">:</span><span class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"eagle.kerberos.principal"</span><span class="p">:</span><span class="s2">"[email protected]"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> <p>If there is an exception about âinvalid server principal nameâ, you may need to check the DNS resolver, or the data transfer , such as âdfs.encrypt.data.transferâ, âdfs.encrypt.data.transfer.algorithmâ, âdfs.trustedchannel.resolver.classâ, âdfs.datatransfer.client.encryptâ.</p> </li> @@ -275,14 +278,15 @@ Here we give configuration examples for <li> <p>Basic</p> - <pre><code> { - "accessType": "metastoredb_jdbc", - "password": "hive", - "user": "hive", - "jdbcDriverClassName": "com.mysql.jdbc.Driver", - "jdbcUrl": "jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"accessType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"metastoredb_jdbc"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"password"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"user"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hive"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"jdbcDriverClassName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"com.mysql.jdbc.Driver"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"jdbcUrl"</span><span class="p">:</span><span class="w"> </span><span 
class="s2">"jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> </ul> </li> @@ -295,27 +299,29 @@ Here we give configuration examples for <p>You need to sett âhbase.zookeeper.quorumâ:âlocalhostâ property and âhbase.zookeeper.property.clientPortâ property.</p> - <pre><code> { - "hbase.zookeeper.property.clientPort":"2181", - "hbase.zookeeper.quorum":"localhost" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.property.clientPort"</span><span class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.quorum"</span><span class="p">:</span><span class="s2">"localhost"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> <li> <p>Kerberos-secured cluster</p> <p>According to your environment, you can add or remove some of the following properties. 
Here is the reference.</p> - <pre><code> { - "hbase.zookeeper.property.clientPort":"2181", - "hbase.zookeeper.quorum":"localhost", - "hbase.security.authentication":"kerberos", - "hbase.master.kerberos.principal":"hadoop/[email protected]", - "zookeeper.znode.parent":"/hbase", - "eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab", - "eagle.kerberos.principal":"[email protected]" - } -</code></pre> + <div class="highlighter-rouge"><pre class="highlight"><code><span class="w"> </span><span class="p">{</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.property.clientPort"</span><span class="p">:</span><span class="s2">"2181"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.zookeeper.quorum"</span><span class="p">:</span><span class="s2">"localhost"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.security.authentication"</span><span class="p">:</span><span class="s2">"kerberos"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"hbase.master.kerberos.principal"</span><span class="p">:</span><span class="s2">"hadoop/[email protected]"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"zookeeper.znode.parent"</span><span class="p">:</span><span class="s2">"/hbase"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"eagle.keytab.file"</span><span class="p">:</span><span class="s2">"/EAGLE-HOME/.keytab/eagle.keytab"</span><span class="p">,</span><span class="w"> + </span><span class="nt">"eagle.kerberos.principal"</span><span class="p">:</span><span class="s2">"[email protected]"</span><span class="w"> + </span><span class="p">}</span><span class="w"> +</span></code></pre> + </div> </li> </ul> </li> Modified: eagle/site/docs/tutorial/topologymanagement.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/topologymanagement.html?rev=1778394&r1=1778393&r2=1778394&view=diff 
============================================================================== --- eagle/site/docs/tutorial/topologymanagement.html (original) +++ eagle/site/docs/tutorial/topologymanagement.html Thu Jan 12 07:44:47 2017 @@ -227,7 +227,7 @@ <p>Application manager consists of a daemon scheduler and an execution module. The scheduler periodically loads user operations (start/stop) from the database, and the execution module executes these operations. For more details, please refer to <a href="https://cwiki.apache.org/confluence/display/EAG/Application+Management">here</a>.</p> <h3 id="configurations">Configurations</h3> -<p>The configuration file <code>eagle-scheduler.conf</code> defines scheduler parameters, execution platform settings and parts of default topology configuration.</p> +<p>The configuration file <code class="highlighter-rouge">eagle-scheduler.conf</code> defines scheduler parameters, execution platform settings and parts of the default topology configuration.</p> <ul> <li> @@ -321,7 +321,7 @@ <li> <p>Edit eagle-scheduler.conf, and start the Eagle service</p> - <pre><code> # enable application manager + <div class="highlighter-rouge"><pre class="highlight"><code> # enable application manager appCommandLoaderEnabled = true # provide jar path @@ -331,9 +331,10 @@ envContextConfig.url = "http://sandbox.hortonworks.com:8744" envContextConfig.nimbusHost = "sandbox.hortonworks.com" </code></pre> + </div> <p>For more configurations, please refer back to <a href="/docs/configuration.html">Application Configuration</a>. 
<br /> - After the configuration is ready, start Eagle service <code>bin/eagle-service.sh start</code>.</p> + After the configuration is ready, start Eagle service <code class="highlighter-rouge">bin/eagle-service.sh start</code>.</p> </li> <li> <p>Go to admin page @@ -355,11 +356,11 @@ <li> <p>Go to site page, and add topology configurations.</p> - <p><strong>NOTICE</strong> topology configurations defined here are REQUIRED an extra prefix <code>.app</code></p> + <p><strong>NOTICE</strong> topology configurations defined here require an extra prefix <code class="highlighter-rouge">.app</code></p> <p>Below are some example configurations for [site=sandbox, application=hbaseSecurityLog].</p> - <pre><code> classification.hbase.zookeeper.property.clientPort=2181 + <div class="highlighter-rouge"><pre class="highlight"><code> classification.hbase.zookeeper.property.clientPort=2181 classification.hbase.zookeeper.quorum=sandbox.hortonworks.com app.envContextConfig.env=storm @@ -388,6 +389,7 @@ app.eagleProps.eagleService.username=admin app.eagleProps.eagleService.password=secret </code></pre> + </div> <p><img src="/images/appManager/topology-configuration-1.png" alt="topology-configuration-1" /> <img src="/images/appManager/topology-configuration-2.png" alt="topology-configuration-2" /></p> Modified: eagle/site/docs/tutorial/userprofile.html URL: http://svn.apache.org/viewvc/eagle/site/docs/tutorial/userprofile.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/docs/tutorial/userprofile.html (original) +++ eagle/site/docs/tutorial/userprofile.html Thu Jan 12 07:44:47 2017 @@ -232,9 +232,10 @@ is started.</p> <li> <p>Option 1: command line</p> - <pre><code>$ cd <eagle-home>/bin + <div class="highlighter-rouge"><pre class="highlight"><code>$ cd <eagle-home>/bin $ bin/eagle-userprofile-scheduler.sh --site sandbox start </code></pre> + </div> </li> <li> <p>Option 2: start via 
Apache Ambari @@ -262,8 +263,9 @@ $ bin/eagle-userprofile-scheduler.sh --s <p>submit userProfiles topology if itâs not on <a href="http://sandbox.hortonworks.com:8744">topology UI</a></p> - <pre><code>$ bin/eagle-topology.sh --main org.apache.eagle.security.userprofile.UserProfileDetectionMain --config conf/sandbox-userprofile-topology.conf start + <div class="highlighter-rouge"><pre class="highlight"><code>$ bin/eagle-topology.sh --main org.apache.eagle.security.userprofile.UserProfileDetectionMain --config conf/sandbox-userprofile-topology.conf start </code></pre> + </div> </li> <li> <p><strong>Option 2</strong>: Apache Ambari</p> @@ -278,23 +280,26 @@ $ bin/eagle-userprofile-scheduler.sh --s <li>Prepare sample data for ML training and validation sample data <ul> <li>a. Download following sample data to be used for training</li> - <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code>user1.hdfs-audit.2015-10-11-00.txt</code></a></li> - <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code>user1.hdfs-audit.2015-10-11-01.txt</code></a></li> - <li>b. Downlaod <a href="/data/userprofile-validate.txt"><code>userprofile-validate.txt</code></a>file which contains data points that you can try to test the models</li> + </ul> + <ul> + <li><a href="/data/user1.hdfs-audit.2015-10-11-00.txt"><code class="highlighter-rouge">user1.hdfs-audit.2015-10-11-00.txt</code></a></li> + <li><a href="/data/user1.hdfs-audit.2015-10-11-01.txt"><code class="highlighter-rouge">user1.hdfs-audit.2015-10-11-01.txt</code></a> + * b. 
Download <a href="/data/userprofile-validate.txt"><code class="highlighter-rouge">userprofile-validate.txt</code></a> file which contains data points that you can try to test the models</li> </ul> </li> <li>Copy the files (downloaded in the previous step) into a location in the sandbox -For example: <code>/usr/hdp/current/eagle/lib/userprofile/data/</code></li> - <li>Modify <code><Eagle-home>/conf/sandbox-userprofile-scheduler.conf </code> -update <code>training-audit-path</code> to set to the path for training data sample (the path you used for Step 1.a) +For example: <code class="highlighter-rouge">/usr/hdp/current/eagle/lib/userprofile/data/</code></li> + <li>Modify <code class="highlighter-rouge"><Eagle-home>/conf/sandbox-userprofile-scheduler.conf </code> +update <code class="highlighter-rouge">training-audit-path</code> to point to the path of the training data sample (the path you used for Step 1.a) update detection-audit-path to point to the path used for validation (the path you used for Step 1.b)</li> <li>Run the ML training program from the Eagle UI</li> <li> <p>Produce Apache Kafka data using the contents from validate file (Step 1.b) -Run the command (assuming the eagle configuration uses Kafka topic <code>sandbox_hdfs_audit_log</code>)</p> +Run the command (assuming the eagle configuration uses Kafka topic <code class="highlighter-rouge">sandbox_hdfs_audit_log</code>)</p> - <pre><code> ./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log + <div class="highlighter-rouge"><pre class="highlight"><code> ./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic sandbox_hdfs_audit_log </code></pre> + </div> </li> <li>Paste a few lines of data from the validate file into kafka-console-producer Check <a href="http://localhost:9099/eagle-service/#/dam/alertList">http://localhost:9099/eagle-service/#/dam/alertList</a> for generated alerts</li> Modified: eagle/site/feed.xml URL: 
http://svn.apache.org/viewvc/eagle/site/feed.xml?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/feed.xml (original) +++ eagle/site/feed.xml Thu Jan 12 07:44:47 2017 @@ -5,9 +5,9 @@ <description>Eagle - Analyze Big Data Platforms for Security and Performance</description> <link>http://goeagle.io/</link> <atom:link href="http://goeagle.io/feed.xml" rel="self" type="application/rss+xml"/> - <pubDate>Tue, 03 Jan 2017 09:20:56 +0800</pubDate> - <lastBuildDate>Tue, 03 Jan 2017 09:20:56 +0800</lastBuildDate> - <generator>Jekyll v2.5.3</generator> + <pubDate>Thu, 12 Jan 2017 15:28:13 +0800</pubDate> + <lastBuildDate>Thu, 12 Jan 2017 15:28:13 +0800</lastBuildDate> + <generator>Jekyll v3.3.1</generator> <item> <title>Apache Eagle æ£å¼åå¸ï¼åå¸å¼å®æ¶Hadoopæ°æ®å®å ¨æ¹æ¡</title> @@ -17,7 +17,7 @@ <p>æ¥åï¼eBayå ¬å¸éé宣叿£å¼å弿ºä¸çæ¨åºåå¸å¼å®æ¶å®å ¨çæ§æ¹æ¡ ï¼ Apache Eagle (http://goeagle.io)ï¼è¯¥é¡¹ç®å·²äº2015å¹´10æ26æ¥æ£å¼å å ¥Apache æä¸ºåµåå¨é¡¹ç®ãApache Eagleæä¾ä¸å¥é«æåå¸å¼çæµå¼çç¥å¼æï¼å ·æé«å®æ¶ãå¯ä¼¸ç¼©ãææ©å±ã交äºå好çç¹ç¹ï¼åæ¶éææºå¨å¦ä¹ å¯¹ç¨æ·è¡ä¸ºå»ºç«Profile以å®ç°æºè½å®æ¶å°ä¿æ¤Hadoopçæç³»ç»ä¸å¤§æ°æ®çå®å ¨ã</p> -<h2 id="section">èæ¯</h2> +<h2 id="èæ¯">èæ¯</h2> <p>éçå¤§æ°æ®çåå±ï¼è¶æ¥è¶å¤çæåä¼ä¸æè ç»ç»å¼å§éåæ°æ®é©±å¨åä¸çè¿ä½æ¨¡å¼ãå¨eBayï¼æä»¬æ¥ææ°ä¸åå·¥ç¨å¸ãåæå¸åæ°æ®ç§å¦å®¶ï¼ä»ä»¬æ¯å¤©è®¿é®åææ°PBçº§çæ°æ®ï¼ä»¥ä¸ºæä»¬çç¨æ·å¸¦æ¥æ ä¸ä¼¦æ¯çä½éªãå¨å ¨çä¸å¡ä¸ï¼æä»¬ä¹å¹¿æ³å°å©ç¨æµ·éå¤§æ°æ®æ¥è¿æ¥æä»¬æ°ä»¥äº¿è®¡çç¨æ·ã</p> <p>è¿å¹´æ¥ï¼Hadoopå·²ç»éæ¸æä¸ºå¤§æ°æ®åæé¢åæå欢è¿çè§£å³æ¹æ¡ï¼eBayä¹ä¸ç´å¨ä½¿ç¨Hadoopææ¯ä»æ°æ®ä¸ææä»·å¼ï¼ä¾å¦ï¼æä»¬éè¿å¤§æ°æ®æé«ç¨æ·çæç´¢ä½éªï¼è¯å«åä¼åç²¾åå¹¿åææ¾ï¼å 宿们ç产åç®å½ï¼ä»¥åéè¿ç¹å»æµåæä»¥çè§£ç¨æ·å¦ä½ä½¿ç¨æä»¬çå¨çº¿å¸åºå¹³å°çã</p> @@ -54,20 +54,20 @@ <li><strong>弿º</strong>ï¼Eagleä¸ç´æ ¹æ®å¼æºçæ åå¼åï¼å¹¶æå»ºäºè¯¸å¤å¤§æ°æ®é¢åç弿ºäº§åä¹ä¸ï¼å æ¤æä»¬å³å®ä»¥Apache许å¯è¯å¼æºEagleï¼ä»¥åé¦ç¤¾åºï¼åæ¶ä¹æå¾ è·å¾ç¤¾åºçåé¦ãåä½ä¸æ¯æã</li> </ul> -<h2 id="eagle">Eagleæ¦è§</h2> +<h2 
id="eagleæ¦è§">Eagleæ¦è§</h2> <p><img src="/images/posts/eagle-group.png" alt="" /></p> -<h4 id="data-collection-and-storage">æ°æ®æµæ¥å ¥ååå¨ï¼Data Collection and Storageï¼</h4> +<h4 id="æ°æ®æµæ¥å ¥ååå¨data-collection-and-storage">æ°æ®æµæ¥å ¥ååå¨ï¼Data Collection and Storageï¼</h4> <p>Eagleæä¾é«åº¦å¯æ©å±çç¼ç¨APIï¼å¯ä»¥æ¯æå°ä»»ä½ç±»åçæ°æ®æºéæå°Eagleççç¥æ§è¡å¼æä¸ãä¾å¦ï¼å¨Eagle HDFS 审计äºä»¶ï¼Auditï¼çæ§æ¨¡åä¸ï¼éè¿Kafkaæ¥å®æ¶æ¥æ¶æ¥èªNamenode Log4j Appender æè Logstash Agent æ¶éçæ°æ®ï¼å¨Eagle Hive çæ§æ¨¡åä¸ï¼éè¿YARN API æ¶éæ£å¨è¿è¡JobçHive æ¥è¯¢æ¥å¿ï¼å¹¶ä¿è¯æ¯è¾é«çå¯ä¼¸ç¼©æ§å容鿧ã</p> -<h4 id="data-processing">æ°æ®å®æ¶å¤çï¼Data Processingï¼</h4> +<h4 id="æ°æ®å®æ¶å¤çdata-processing">æ°æ®å®æ¶å¤çï¼Data Processingï¼</h4> <p><strong>æµå¤çAPIï¼Stream Processing APIï¼Eagle</strong> æä¾ç¬ç«äºç©çå¹³å°èé«åº¦æ½è±¡çæµå¤çAPIï¼ç®åé»è®¤æ¯æApache Stormï¼ä½æ¯ä¹å 许æ©å±å°å ¶ä»ä»»ææµå¤çå¼æï¼æ¯å¦Flink æè Samzaçãè¯¥å±æ½è±¡å 许å¼åè å¨å®ä¹çæ§æ°æ®å¤çé»è¾æ¶ï¼æ éå¨ç©çæ§è¡å±ç»å®ä»»ä½ç¹å®æµå¤çå¹³å°ï¼èåªééè¿å¤ç¨ãæ¼æ¥åç»è£ ä¾å¦æ°æ®è½¬æ¢ãè¿æ»¤ãå¤é¨æ°æ®Joinçç»ä»¶ï¼ä»¥å®ç°æ»¡è¶³éæ±çDAGï¼æåæ ç¯å¾ï¼ï¼åæ¶ï¼å¼å� � ä¹å¯ä»¥å¾å®¹æå°ä»¥ç¼ç¨å°æ¹å¼å°ä¸å¡é»è¾æµç¨åEagle çç¥å¼ææ¡æ¶éæèµ·æ¥ãEagleæ¡æ¶å é¨ä¼å°æè¿°ä¸å¡é»è¾çDAGç¼è¯æåºå±æµå¤çæ¶æçåçåºç¨ï¼ä¾å¦Apache Storm Topology çï¼ä»äºå®ç°å¹³å°çç¬ç«ã</p> <p><strong>以䏿¯ä¸ä¸ªEagleå¦ä½å¤çäºä»¶ååè¦ç示ä¾ï¼</strong></p> -<pre><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env +<div class="highlighter-rouge"><pre class="highlight"><code>StormExecutionEnvironment env = ExecutionEnvironmentFactory.getStorm(config); // storm env StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare kafka source .flatMap(new AuditLogTransformer()) // transform event .groupBy(Arrays.asList(0)) // group by 1st field @@ -75,6 +75,7 @@ StreamProducer producer = env.newSource( .alertWithConsumer(âuserActivityâ,âuserProfileExecutorâ) // ML policy evaluation env.execute(); // execute stream 
processing and alert </code></pre> +</div> <p><strong>åè¦æ¡æ¶ï¼Alerting Frameworkï¼Eagle</strong>åè¦æ¡æ¶ç±æµå æ°æ®APIãçç¥å¼ææå¡æä¾APIãçç¥Partitioner API 以åé¢è¦å»éæ¡æ¶çç»æ:</p> @@ -84,7 +85,7 @@ env.execute(); // execute stream process <li> <p><strong>æ©å±æ§</strong> Eagleççç¥å¼ææå¡æä¾APIå è®¸ä½ æå ¥æ°ççç¥å¼æ</p> - <pre><code> public interface PolicyEvaluatorServiceProvider { + <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyEvaluatorServiceProvider { public String getPolicyType(); // literal string to identify one type of policy public Class&lt;? extends PolicyEvaluator&gt; getPolicyEvaluator(); // get policy evaluator implementation public List&lt;Module&gt; getBindingModules(); // policy text with json format to object mapping @@ -95,15 +96,17 @@ env.execute(); // execute stream process public void onPolicyDelete(); // invoked when policy is deleted } </code></pre> + </div> </li> <li><strong>çç¥Partitioner API</strong> å 许çç¥å¨ä¸åçç©çèç¹ä¸å¹¶è¡æ§è¡ãä¹å è®¸ä½ èªå®ä¹çç¥Partitionerç±»ãè¿äºåè½ä½¿å¾çç¥åäºä»¶å®å ¨ä»¥åå¸å¼çæ¹å¼æ§è¡ã</li> <li> <p><strong>å¯ä¼¸ç¼©æ§</strong> Eagle éè¿æ¯æçç¥çååºæ¥å£æ¥å®ç°å¤§éççç¥å¯ä¼¸ç¼©å¹¶åå°è¿è¡</p> - <pre><code> public interface PolicyPartitioner extends Serializable { + <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyPartitioner extends Serializable { int partition(int numTotalPartitions, String policyType, String policyId); // method to distribute policies } </code></pre> + </div> <p><img src="/images/posts/policy-partition.png" alt="" /></p> @@ -160,26 +163,29 @@ Eagle æ¯ææ ¹æ®ç¨æ <li> <p>åä¸äºä»¶æ§è¡çç¥ï¼ç¨æ·è®¿é®Hiveä¸çæææ°æ®åï¼</p> - <pre><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream; + <div class="highlighter-rouge"><pre class="highlight"><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream; </code></pre> + </div> </li> <li> <p>åºäºçªå£ççç¥ï¼ç¨æ·å¨10åéå 访é®ç®å½ 
/tmp/private more than 5 times within 10 minutes):</p> - <pre><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into outputStream; + <div class="highlighter-rouge"><pre class="highlight"><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue &gt;= 5 insert into outputStream; </code></pre> + </div> </li> </ul> <p><strong>Query Service:</strong> Eagle provides SQL-like REST APIs for comprehensive computation, querying, and analysis over massive data sets, supporting operations such as filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, and paging. Eagle supports HBase as its default data store, and also JDBC-based relational databases. In particular, when HBase is chosen as the store, Eagle natively gains the ability to store and query massive monitoring data: the Eagle query framework compiles a user-supplied SQL-like query into native HBase Filter objects, and can further improve response speed through HBase Coprocessors.</p> -<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000 +<div class="highlighter-rouge"><pre class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&amp;pageSize=100000 </code></pre> +</div> -<h2 id="eagleebay">Use Cases of Eagle at eBay</h2> +<h2 id="eagle在ebay的使用场景">Use Cases of Eagle at eBay</h2> <p>Today, Eagle's data activity monitoring has been deployed on a Hadoop cluster of more than 2,500 nodes to protect the security of hundreds of petabytes of data, and we plan to extend it to another dozen Hadoop clusters before the end of the year, covering the 10,000+ nodes of all of eBay's major Hadoop clusters. In our production environment we have configured basic security policies for data on the HDFS and Hive clusters, and will keep introducing more policies before year end to ensure the absolute safety of critical data. Eagle's policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and all the policies related to user profiles generated by machine-learning models. We also have a broad set of policies to prevent data loss, data being copied to insecure locations, unauthorized access to sensitive data, and so on. The great flexibility and extensibility of Eagle's policy definitions will let us easily extend to more, and more complex, policies to support many more diversified use cases in the future.</p> -<h2
id="section-1">Future Plans</h2> +<h2 id="后续计划">Future Plans</h2> <p>Over the past two years at eBay, beyond data activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the overall health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, which save our platform team a great deal of manual effort and effectively improve cluster resource utilization.</p> <p>The following are some features currently under development:</p> @@ -196,7 +202,7 @@ Eagle 支持根据用 </li> </ul> -<h2 id="section-2">About the Author</h2> +<h2 id="关于作者">About the Author</h2> <p><a href="https://github.com/haoch">Hao Chen</a>, Apache Eagle committer and PMC member, is a senior software engineer in eBay's Analytics Data Infrastructure department, responsible for Eagle's product design, technical architecture, core implementation, and open-source community advocacy.</p> <p>Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:</p> @@ -210,7 +216,7 @@ Eagle 支持根据用 <p>eBay's Analytics Data Infrastructure (ADI) department is eBay's global data and analytics infrastructure organization, responsible for developing and managing eBay's data platforms spanning databases, data warehousing, Hadoop, business intelligence, and machine learning. It supports teams across eBay in making timely and effective operational decisions with high-end data analytics, and provides data analytics solutions to business users around the globe.</p> -<h2 id="section-3">References</h2> +<h2 id="参考资料">References</h2> <ul> <li>Apache Eagle documentation: <a href="http://goeagle.io">http://goeagle.io</a></li> @@ -218,7 +224,7 @@ Eagle 支持根据用 <li>Apache Eagle project: <a href="http://incubator.apache.org/projects/eagle.html">http://incubator.apache.org/projects/eagle.html</a></li> </ul> -<h2 id="section-4">Citation Links</h2> +<h2 id="引用链接">Citation Links</h2> <ul> <li><strong>CSDN</strong>: <a href="http://www.csdn.net/article/2015-10-29/2826076">http://www.csdn.net/article/2015-10-29/2826076</a></li> <li><strong>OSCHINA</strong>: <a href="http://www.oschina.net/news/67515/apache-eagle">http://www.oschina.net/news/67515/apache-eagle</a></li> Modified: eagle/site/index.html URL: http://svn.apache.org/viewvc/eagle/site/index.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/index.html (original) +++
eagle/site/index.html Thu Jan 12 07:44:47 2017 @@ -108,7 +108,10 @@ <br/> <p style="width:80%; margin-left:auto; margin-right:auto;"> Big data platforms normally generate huge amounts of operational logs and metrics in real time. Eagle was founded to solve hard problems in securing and tuning the performance of big data platforms by keeping metrics and logs always available and alerting immediately, even under huge traffic.</p> <div class="sepline"></div> - <P>Eagle has been accepted as an Apache Incubator Project on Oct 26, 2015.</P> + <p>Eagle was announced as a Top Level Project (TLP) of the Apache Software Foundation (ASF) on Jan 10, 2017.</p> + <!-- + <p>Eagle has been accepted as an Apache Incubator Project on Oct 26, 2015.</P> + --> <div class="sepline"></div> <p>Eagle analyzes big data platforms and reports issues in 3 steps:</p> </div> Modified: eagle/site/post/2015/10/27/apache-eagle-announce-cn.html URL: http://svn.apache.org/viewvc/eagle/site/post/2015/10/27/apache-eagle-announce-cn.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/post/2015/10/27/apache-eagle-announce-cn.html (original) +++ eagle/site/post/2015/10/27/apache-eagle-announce-cn.html Thu Jan 12 07:44:47 2017 @@ -93,7 +93,7 @@ <p>Recently, eBay officially announced the open-sourcing of its distributed real-time security monitoring solution, Apache Eagle (http://goeagle.io); the project formally joined the Apache Incubator on October 26, 2015. Apache Eagle provides an efficient, distributed streaming policy engine that is highly real-time, scalable, extensible, and interactive, and it integrates machine learning to build profiles of user behavior so as to intelligently protect the security of big data across the Hadoop ecosystem in real time.</p> -<h2 id="section">Background</h2> +<h2 id="背景">Background</h2> <p>With the development of big data, more and more successful enterprises and organizations are adopting data-driven business models. At eBay, we have thousands of engineers, analysts, and data scientists who access and analyze petabytes of data every day to bring an unparalleled experience to our users. Across our global business, we also make extensive use of massive data to connect our hundreds of millions of users.</p>
<p>In recent years, Hadoop has gradually become the most popular solution in the big data analytics space, and eBay has long been using Hadoop technology to mine value from its data. For example, we use big data to improve users' search experience, to identify and optimize precision advertising, to enrich our product catalog, and to understand through click-stream analysis how users use our online marketplace.</p> @@ -130,20 +130,20 @@ <li><strong>Open source</strong>: Eagle has always been developed to open-source standards and is built on top of many open-source products in the big data space, so we decided to open-source Eagle under the Apache license to give back to the community, and we look forward to the community's feedback, collaboration, and support.</li> </ul> -<h2 id="eagle">Overview of Eagle</h2> +<h2 id="eagle概览">Overview of Eagle</h2> <p><img src="/images/posts/eagle-group.png" alt="" /></p> -<h4 id="data-collection-and-storage">Data Collection and Storage</h4> +<h4 id="数据流接入和存储data-collection-and-storage">Data Collection and Storage</h4> <p>Eagle provides a highly extensible programming API that supports integrating any type of data source into Eagle's policy execution engine. For example, the Eagle HDFS audit-event monitoring module receives, in real time via Kafka, data collected by the Namenode Log4j appender or a Logstash agent; the Eagle Hive monitoring module collects the Hive query logs of running jobs through the YARN API, while preserving high scalability and fault tolerance.</p> -<h4 id="data-processing">Data Processing</h4> +<h4 id="数据实时处理data-processing">Data Processing</h4> <p><strong>Stream Processing API:</strong> Eagle provides a highly abstract stream processing API that is independent of the physical platform; Apache Storm is supported by default, but it can be extended to any other stream processing engine, such as Flink or Samza. This abstraction lets developers define monitoring data-processing logic without binding to any particular stream processing platform at the physical execution layer: they only need to reuse, chain, and assemble components such as data transformation, filtering, and external data joins into a DAG (directed acyclic graph) that meets their needs. Developers can also easily integrate their business logic flows with the Eagle policy engine framework programmatically. Internally, Eagle compiles the DAG describing the business logic into a native application of the underlying stream processing architecture, for example an Apache Storm topology, thereby achieving platform independence.</p> <p><strong>The following is an example of how Eagle processes events and generates alerts:</strong></p> -<pre><code>StormExecutionEnvironment env = +<div class="highlighter-rouge"><pre class="highlight"><code>StormExecutionEnvironment env = 
ExecutionEnvironmentFactory.getStorm(config); // storm env StreamProducer producer = env.newSource(new KafkaSourcedSpoutProvider().getSpout(config)).renameOutputFields(1) // declare kafka source .flatMap(new AuditLogTransformer()) // transform event .groupBy(Arrays.asList(0)) // group by 1st field @@ -151,6 +151,7 @@ StreamProducer producer = env.newSource( .alertWithConsumer("userActivity","userProfileExecutor") // ML policy evaluation env.execute(); // execute stream processing and alert </code></pre> +</div> <p><strong>Alerting Framework:</strong> The Eagle alerting framework consists of the stream metadata API, the policy engine service provider API, the policy partitioner API, and the alert deduplication framework:</p> @@ -160,7 +161,7 @@ env.execute(); // execute stream process <li> <p><strong>Extensibility</strong> Eagle's policy engine service provider API allows you to plug in new policy engines</p> - <pre><code> public interface PolicyEvaluatorServiceProvider { + <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyEvaluatorServiceProvider { public String getPolicyType(); // literal string to identify one type of policy public Class<?
extends PolicyEvaluator> getPolicyEvaluator(); // get policy evaluator implementation public List<Module> getBindingModules(); // policy text with json format to object mapping @@ -171,15 +172,17 @@ env.execute(); // execute stream process public void onPolicyDelete(); // invoked when policy is deleted } </code></pre> + </div> </li> <li><strong>Policy Partitioner API</strong> allows policies to execute in parallel on different physical nodes, and also lets you customize the policy partitioner class. These capabilities let policies and events run in a fully distributed fashion.</li> <li> <p><strong>Scalability</strong> Eagle supports a policy partition interface so that large numbers of policies can run concurrently and at scale</p> - <pre><code> public interface PolicyPartitioner extends Serializable { + <div class="highlighter-rouge"><pre class="highlight"><code> public interface PolicyPartitioner extends Serializable { int partition(int numTotalPartitions, String policyType, String policyId); // method to distribute policies } </code></pre> + </div> <p><img src="/images/posts/policy-partition.png" alt="" /></p> @@ -236,26 +239,29 @@ Eagle 支持根据用 <li> <p>Single-event policy (a user accesses a sensitive data column in Hive):</p> - <pre><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream; + <div class="highlighter-rouge"><pre class="highlight"><code> from hiveAccessLogStream[sensitivityType=='PHONE_NUMBER'] select * insert into outputStream; </code></pre> + </div> </li> <li> <p>Window-based policy (a user accesses the directory /tmp/private more than 5 times within 10 minutes):</p> - <pre><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue >= 5 insert into outputStream; + <div class="highlighter-rouge"><pre class="highlight"><code> hdfsAuditLogEventStream[(src == '/tmp/private')]#window.externalTime(timestamp,10 min) select user, count(timestamp) as aggValue group by user having aggValue >= 5 insert into outputStream; </code></pre> + </div> </li> </ul> <p><strong>Query Service:</strong> Eagle provides SQL-like REST
APIs for comprehensive computation, querying, and analysis over massive data sets, supporting operations such as filtering, aggregation, histograms, sorting, top-N, arithmetic expressions, and paging. Eagle supports HBase as its default data store, and also JDBC-based relational databases. In particular, when HBase is chosen as the store, Eagle natively gains the ability to store and query massive monitoring data: the Eagle query framework compiles a user-supplied SQL-like query into native HBase Filter objects, and can further improve response speed through HBase Coprocessors.</p> -<pre><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&pageSize=100000 +<div class="highlighter-rouge"><pre class="highlight"><code>query=AlertDefinitionService[@dataSource="hiveQueryLog"]{@policyDef}&pageSize=100000 </code></pre> +</div> -<h2 id="eagleebay">Use Cases of Eagle at eBay</h2> +<h2 id="eagle在ebay的使用场景">Use Cases of Eagle at eBay</h2> <p>Today, Eagle's data activity monitoring has been deployed on a Hadoop cluster of more than 2,500 nodes to protect the security of hundreds of petabytes of data, and we plan to extend it to another dozen Hadoop clusters before the end of the year, covering the 10,000+ nodes of all of eBay's major Hadoop clusters. In our production environment we have configured basic security policies for data on the HDFS and Hive clusters, and will keep introducing more policies before year end to ensure the absolute safety of critical data. Eagle's policies currently cover many patterns, including access patterns, frequently accessed data sets, predefined query types, Hive tables and columns, HBase tables, and all the policies related to user profiles generated by machine-learning models. We also have a broad set of policies to prevent data loss, data being copied to insecure locations, unauthorized access to sensitive data, and so on. The great flexibility and extensibility of Eagle's policy definitions will let us easily extend to more, and more complex, policies to support many more diversified use cases in the future.</p> -<h2 id="section-1">Future Plans</h2> +<h2 id="后续计划">Future Plans</h2> <p>Over the past two years at eBay, beyond data activity monitoring, the Eagle core framework has also been widely used to monitor node health, Hadoop application performance metrics, Hadoop core services, and the overall health of entire Hadoop clusters. We have also built a series of automation mechanisms, such as node remediation, which save our platform team a great deal of manual effort and effectively improve cluster resource utilization.</p> <p>The following are some features currently under development:</p> @@ -272,7 +278,7 @@ Eagle 支持根据用 </li> </ul> -<h2 id="section-2">About the Author</h2> +<h2 id="关于作者">About the Author</h2> <p><a href="https://github.com/haoch">Hao Chen</a>, Apache Eagle committer and PMC member, eBay
Analytics Data Infrastructure senior software engineer, responsible for Eagle's product design, technical architecture, core implementation, and open-source community advocacy.</p> <p>Thanks to the following co-authors from the Apache Eagle community and eBay for their contributions to this article:</p> @@ -286,7 +292,7 @@ Eagle 支持根据用 <p>eBay's Analytics Data Infrastructure (ADI) department is eBay's global data and analytics infrastructure organization, responsible for developing and managing eBay's data platforms spanning databases, data warehousing, Hadoop, business intelligence, and machine learning. It supports teams across eBay in making timely and effective operational decisions with high-end data analytics, and provides data analytics solutions to business users around the globe.</p> -<h2 id="section-3">References</h2> +<h2 id="参考资料">References</h2> <ul> <li>Apache Eagle documentation: <a href="http://goeagle.io">http://goeagle.io</a></li> @@ -294,7 +300,7 @@ Eagle 支持根据用 <li>Apache Eagle project: <a href="http://incubator.apache.org/projects/eagle.html">http://incubator.apache.org/projects/eagle.html</a></li> </ul> -<h2 id="section-4">Citation Links</h2> +<h2 id="引用链接">Citation Links</h2> <ul> <li><strong>CSDN</strong>: <a href="http://www.csdn.net/article/2015-10-29/2826076">http://www.csdn.net/article/2015-10-29/2826076</a></li> <li><strong>OSCHINA</strong>: <a href="http://www.oschina.net/news/67515/apache-eagle">http://www.oschina.net/news/67515/apache-eagle</a></li> Modified: eagle/site/sup/index.html URL: http://svn.apache.org/viewvc/eagle/site/sup/index.html?rev=1778394&r1=1778393&r2=1778394&view=diff ============================================================================== --- eagle/site/sup/index.html (original) +++ eagle/site/sup/index.html Thu Jan 12 07:44:47 2017 @@ -131,7 +131,7 @@ <h1 class="page-header" style="margin-top: 0px">Apache Eagle Security</h1> <p>The Apache Software Foundation takes a very active stance in eliminating security problems in its software products. Apache Eagle is also responsive to such issues around its features.</p> -<p>If you have any concern regarding to Eagle's Security or you believe a vulnerability is discovered, don't hesitate to get connected with Aapche Security Team by sending emails to <a href="mailto:security@apache.org">security@apache.org</a>. In the message, you can indicate the project name is Eagle, provide a description of the issue, and you are recommended to give the way of reproducing it. The security team and eagle community will get back to you after assessing the findings.</p> +<p>If you have any concern regarding Eagle's security, or you believe you have discovered a vulnerability, don't hesitate to contact the Apache Security Team by sending email to <a href="mailto:security@apache.org">security@apache.org</a>.
In the message, indicate that the project name is Eagle, provide a description of the issue, and, ideally, describe how to reproduce it. The security team and the Eagle community will get back to you after assessing the findings.</p> <blockquote> <p><strong>PLEASE PAY ATTENTION:</strong> report any security problem to the security email address before disclosing it publicly.</p>
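The PolicyPartitioner interface shown in the diff hunks above is small enough to exercise directly. Below is a minimal, self-contained sketch of a hash-based partitioner; the class HashPolicyPartitioner and its modulo strategy are illustrative assumptions for this note, not the implementation bundled with Eagle:

```java
import java.io.Serializable;

// The partitioner contract, as it appears in the diffs above.
interface PolicyPartitioner extends Serializable {
    int partition(int numTotalPartitions, String policyType, String policyId);
}

// Hypothetical hash-based implementation: the same (policyType, policyId)
// pair always maps to the same partition, so a given policy is always
// evaluated on the same physical node without any coordination.
class HashPolicyPartitioner implements PolicyPartitioner {
    @Override
    public int partition(int numTotalPartitions, String policyType, String policyId) {
        // floorMod keeps the result non-negative even when hashCode() is negative.
        return Math.floorMod((policyType + ":" + policyId).hashCode(), numTotalPartitions);
    }
}

public class PartitionerDemo {
    public static void main(String[] args) {
        PolicyPartitioner p = new HashPolicyPartitioner();
        // "siddhiCEPEngine" and "policy-1" are example identifiers, not real metadata.
        int idx = p.partition(4, "siddhiCEPEngine", "policy-1");
        System.out.println("policy-1 -> partition " + idx); // stable value in [0, 4)
    }
}
```

Because partition() depends only on its arguments, routing is deterministic: adding nodes changes numTotalPartitions and therefore the mapping, but for a fixed partition count every policy consistently lands on one evaluator, which is what lets policy evaluation scale out.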
