[ https://issues.apache.org/jira/browse/YARN-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105332#comment-14105332 ]

Amir Mal commented on YARN-2435:
--------------------------------

h2. My test cluster configuration files:


h3. core-site.xml:
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://htc2n1:8020</value>
  </property>
</configuration>
{code}
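
For what it's worth, a quick way to confirm what a client actually resolves for this key (a minimal sketch, assuming core-site.xml is on the classpath; the class name is just for illustration):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class DefaultFsCheck {
  public static void main(String[] args) throws Exception {
    // new Configuration() loads core-default.xml and core-site.xml
    // from the classpath.
    Configuration conf = new Configuration();
    System.out.println(conf.get("fs.defaultFS"));       // expect hdfs://htc2n1:8020
    // FileSystem.get(conf) resolves the default filesystem from the same key.
    System.out.println(FileSystem.get(conf).getUri());
  }
}
{code}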


h3. hdfs-site.xml:
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///grid/0/hadoop/hdfs/namenode,file:///grid/1/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///grid/1/hadoop/hdfs/data,file:///grid/2/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///grid/0/hadoop/hdfs/namesecondary,file:///grid/1/hadoop/hdfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hdfsadmin</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>htc2n2:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>htc2n1:50070</value>
  </property>
</configuration>
{code}
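
Note the comma-separated directory lists above: the NameNode mirrors its metadata into every directory in {{dfs.namenode.name.dir}} for redundancy, while the DataNode spreads block storage across the {{dfs.datanode.data.dir}} entries. A minimal sketch of how such a list comes back through the Configuration API (class name is illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class NameDirList {
  public static void main(String[] args) {
    // HdfsConfiguration also loads hdfs-default.xml and hdfs-site.xml.
    Configuration conf = new HdfsConfiguration();
    // Comma-separated values come back as a trimmed string list.
    for (String dir : conf.getTrimmedStrings("dfs.namenode.name.dir")) {
      System.out.println(dir);
    }
  }
}
{code}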


h3. yarn-site.xml:
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>htc2n2</value>
  </property>
</configuration>
{code}
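
Only {{yarn.resourcemanager.hostname}} is set here because yarn-default.xml defines the individual RM addresses in terms of that variable, so the ports are filled in automatically. A minimal sketch showing the expansion (assuming yarn-site.xml is on the classpath; class name is illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RmAddressCheck {
  public static void main(String[] args) {
    // YarnConfiguration also loads yarn-default.xml and yarn-site.xml.
    Configuration conf = new YarnConfiguration();
    // yarn-default.xml defaults these to
    // ${yarn.resourcemanager.hostname}:<port>, so setting just the
    // hostname fills in all of the RM addresses.
    System.out.println(conf.get("yarn.resourcemanager.address"));           // htc2n2:8032
    System.out.println(conf.get("yarn.resourcemanager.scheduler.address")); // htc2n2:8030
  }
}
{code}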

h3. mapred-site.xml
{code:xml}
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
{code}


h3. capacity-scheduler.xml:
{code:xml}
<configuration>

  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
    <description>
      Maximum number of applications that can be pending and running.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
    <description>
      Maximum percent of resources in the cluster which can be used to run
      application masters, i.e. it controls the number of concurrently
      running applications.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    <description>
      The ResourceCalculator implementation to be used to compare 
      Resources in the scheduler.
      The default, DefaultResourceCalculator, only uses Memory, while
      DominantResourceCalculator uses the dominant resource to compare
      multi-dimensional resources such as Memory, CPU, etc.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default</value>
    <description>
      The queues at this level (root is the root queue).
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>100</value>
    <description>Default queue target capacity.</description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
    <description>
      Default queue user limit, a percentage from 0.0 to 1.0.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>100</value>
    <description>
      The maximum capacity of the default queue. 
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
    <description>
      The state of the default queue. State can be one of RUNNING or STOPPED.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
    <value> group1</value>
    <description>
      The ACL of who can submit jobs to the root queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
    <value> group1</value>
    <description>
      The ACL of who can administer jobs on the root queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value> group1</value>
    <description>
      The ACL of who can submit jobs to the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value> group1</value>
    <description>
      The ACL of who can administer jobs on the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
    <description>
      Number of missed scheduling opportunities after which the
      CapacityScheduler attempts to schedule rack-local containers.
      Typically this should be set to the number of nodes in the cluster;
      by default it is set to approximately the number of nodes in one
      rack, which is 40.
    </description>
  </property>

</configuration>
{code}
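
One detail that is easy to miss in the ACL values above: the format is "user1,user2 group1,group2", so the leading space in {{ group1}} means "no users, only members of group1". A minimal sketch of how hadoop-common parses that string (class name and test values are illustrative):
{code:java}
import org.apache.hadoop.security.authorize.AccessControlList;

public class AclParseDemo {
  public static void main(String[] args) {
    // " group1": empty user list, groups = [group1]
    AccessControlList acl = new AccessControlList(" group1");
    System.out.println(acl.getUsers());    // []
    System.out.println(acl.getGroups());   // [group1]
    // "*" means everyone; a single space by itself means nobody.
    System.out.println(new AccessControlList("*").isAllAllowed());  // true
  }
}
{code}
Also worth double-checking when testing queue ACLs: they are only enforced when {{yarn.acl.enable}} is set to true in yarn-site.xml (it defaults to false).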

> Capacity scheduler should only allow Kill Application Requests from 
> ADMINISTER_QUEUE users
> ------------------------------------------------------------------------------------------
>
>                 Key: YARN-2435
>                 URL: https://issues.apache.org/jira/browse/YARN-2435
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>    Affects Versions: 2.5.0, 2.4.1
>         Environment: [root@htc2n1 ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 6.4 (Santiago)
> [root@htc2n1 ~]# uname -a
> Linux htc2n3.....com 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 
> 2013 x86_64 x86_64 x86_64 GNU/Linux
> [root@htc2n1 ~]# $JAVA_HOME/bin/java -version
> java version "1.7.0_55"
> OpenJDK Runtime Environment (rhel-2.4.7.1.el6_5-x86_64 u55-b13)
> OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
>            Reporter: Amir Mal
>
> A user without the ADMINISTER_QUEUE privilege can kill applications in 
> any queue.
> To replicate the bug:
> 1) install a cluster with {{yarn.resourcemanager.scheduler.class}} set to 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.*CapacityScheduler*
> 2) create 2 users (user1, user2), each belonging to a separate group 
> (group1, group2)
> 3) set {{acl_submit_applications}} and {{acl_administer_queue}} of the 
> {{root}} and {{root.default}} queues to group1
> 4) submit a job to the {{default}} queue as user1
> {quote}
> [user1@htc2n3 ~]$ mapred  queue -showacls
> ...
> Queue acls for user :  user1
> Queue  Operations
> =====================
> root  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
> default  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
> [user1@htc2n3 ~]$ yarn  jar 
> /opt/apache/hadoop-2.5.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
>  pi -Dmapreduce.job.queuename=default 4 1000000000
> {quote}
> 5) kill the application as user2
> {quote}
> [user2@htc2n4 ~]$ mapred  queue -showacls
> ...
> Queue acls for user :  user2
> Queue  Operations
> =====================
> root
> default
> [user2@htc2n4 ~]$ yarn application -kill application_1408540602935_0004
> ...
> Killing application application_1408540602935_0004
> 14/08/21 14:37:54 INFO impl.YarnClientImpl: Killed application 
> application_1408540602935_0004
> {quote}


