[jira] [Created] (YARN-8865) RMStateStore contains large number of expired RMDelegationToken

2018-10-09 Thread Wilfred Spiegelenburg (JIRA)
Wilfred Spiegelenburg created YARN-8865:
---

 Summary: RMStateStore contains large number of expired 
RMDelegationToken
 Key: YARN-8865
 URL: https://issues.apache.org/jira/browse/YARN-8865
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.1.0
Reporter: Wilfred Spiegelenburg
Assignee: Wilfred Spiegelenburg


When the RM state store is restored, expired delegation tokens are restored 
and added to the system. These expired tokens never get cleaned up or 
removed. The exact reason why the tokens are still in the store is not 
clear. We have seen as many as 250,000 tokens in the store, some of which 
were 2 years old.

This has two side effects:
* for the ZooKeeper store, this leads to jute buffer exhaustion and 
prevents the RM from becoming active;
* restore takes longer than needed, and heap usage is higher than it 
should be.

We should not restore already expired tokens since they cannot be renewed or 
used.
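
A minimal sketch of the proposed behavior, assuming the recovery path has 
the stored renew date at hand (the method name and placement are 
illustrative, not the eventual patch):

    // Hypothetical filter applied while replaying stored
    // RMDelegationToken entries during RM recovery.
    static boolean shouldRestoreToken(long renewDate) {
      // A token whose renew date has already passed can never be
      // renewed or used again; restoring it only inflates heap usage
      // and, for the ZooKeeper store, the jute buffer payload.
      return renewDate >= System.currentTimeMillis();
    }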






[jira] [Resolved] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-10-09 Thread Weiwei Yang (JIRA)


 [ https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang resolved YARN-8468.
---
Resolution: Fixed

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8468-branch-3.1.018.patch, 
> YARN-8468-branch-3.1.019.patch, YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch, 
> YARN-8468.017.patch, YARN-8468.018.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue 
> basis.
> The use case: a user has two pools, one for ad hoc jobs and one for 
> enterprise apps, and wants to limit ad hoc jobs to small containers while 
> allowing enterprise apps to request as many resources as needed. 
> yarn.scheduler.maximum-allocation-mb would then set the default maximum 
> container size for all queues, while the "maxContainerResources" queue 
> config value would set the maximum resources per queue.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf), this will cover dynamically created queues.
>  * if we set it on the root we override the scheduler setting, and we 
> should not allow that.
>  * make sure that the queue resource cap cannot be larger than the 
> scheduler max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue (see the sketch after this list)
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc. for the queue.
>  * enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting
>  ** use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request
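
A hedged sketch of the per-queue lookup in FairScheduler, assuming the 
existing queueMgr field and a new, illustrative getMaxContainerAllocation() 
accessor on FSQueue (an assumption, not taken from the attached patches):

    // Falls back to the scheduler-wide maximum when the queue does not
    // define its own cap.
    @Override
    public Resource getMaximumResourceCapability(String queueName) {
      FSQueue queue = queueMgr.getQueue(queueName);
      if (queue != null && queue.getMaxContainerAllocation() != null) {
        // Per-queue cap; configuration validation ensures it is not
        // larger than the scheduler maximum.
        return queue.getMaxContainerAllocation();
      }
      return getMaximumResourceCapability(); // scheduler-level default
    }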






[jira] [Created] (YARN-8864) NM incorrectly logs container user as the user who sent a stop container request in its audit log

2018-10-09 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-8864:


 Summary: NM incorrectly logs container user as the user who sent a 
stop container request in its audit log
 Key: YARN-8864
 URL: https://issues.apache.org/jira/browse/YARN-8864
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.2.0
Reporter: Haibo Chen


As seen in ContainerManagerImpl.java, the audit entry records 
container.getUser(), i.e. the user the container runs as, rather than the 
remote user who issued the stop request:

    protected void stopContainerInternal(ContainerId containerID)
        throws YarnException, IOException {
      ...
      // container.getUser() is the user the container runs as, not the
      // caller who sent the stop-container request.
      NMAuditLogger.logSuccess(container.getUser(),
          AuditConstants.STOP_CONTAINER, "ContainerManageImpl", containerID
              .getApplicationAttemptId().getApplicationId(), containerID);
    }
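
A minimal sketch of one possible fix, assuming ContainerManagerImpl's 
existing getRemoteUgi() helper is in scope at this point (the eventual 
patch may resolve the caller differently):

    // Hypothetical fix: attribute the audit entry to the RPC caller.
    // getRemoteUgi() wraps UserGroupInformation.getCurrentUser(), which
    // inside the NM's IPC handler is the user who sent the request.
    String remoteUser = getRemoteUgi().getShortUserName();
    NMAuditLogger.logSuccess(remoteUser,
        AuditConstants.STOP_CONTAINER, "ContainerManageImpl", containerID
            .getApplicationAttemptId().getApplicationId(), containerID);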






[jira] [Created] (YARN-8863) Define yarn node manager local dirs in container-executor.cfg

2018-10-09 Thread Eric Yang (JIRA)
Eric Yang created YARN-8863:
---

 Summary: Define yarn node manager local dirs in 
container-executor.cfg
 Key: YARN-8863
 URL: https://issues.apache.org/jira/browse/YARN-8863
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: security, yarn
Reporter: Eric Yang


The current implementation of container-executor accepts nm-local-dirs and 
nm-log-dirs as CLI arguments. If the yarn user is compromised, a rogue 
yarn user could use container-executor to point nm-local-dirs at a user's 
home directory and modify user-owned files. This JIRA is to enhance 
container-executor.cfg to allow yarn.nodemanager.local-dirs to be 
specified there, safeguarding against a rogue yarn user exploiting the 
nm-local-dirs paths.
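
A sketch of what the proposed entry might look like in 
container-executor.cfg, next to keys that already exist there (the new key 
name and the paths are illustrative; the exact syntax would be settled by 
the patch):

    # existing settings
    yarn.nodemanager.linux-container-executor.group=hadoop
    banned.users=hdfs,yarn,mapred
    min.user.id=1000
    # proposed: pin the allowed local/log dirs so that CLI arguments can
    # no longer redirect them to arbitrary paths such as a home directory
    yarn.nodemanager.local-dirs=/var/lib/hadoop-yarn/nm-local-dir
    yarn.nodemanager.log-dirs=/var/log/hadoop-yarn/containers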






[jira] [Created] (YARN-8862) [GPG] add Yarn Registry cleanup in ApplicationCleaner

2018-10-09 Thread Botong Huang (JIRA)
Botong Huang created YARN-8862:
--

 Summary: [GPG] add Yarn Registry cleanup in ApplicationCleaner
 Key: YARN-8862
 URL: https://issues.apache.org/jira/browse/YARN-8862
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Botong Huang
Assignee: Botong Huang


In YARN Federation, we use the YARN Registry to store the AMToken for UAMs 
in secondary sub-clusters. Because more app attempts may come later, 
AMRMProxy cannot kill the UAM and delete the tokens when one local attempt 
finishes. So, similar to the StateStore application table, we need the 
ApplicationCleaner in GPG to clean up the entries of finished apps in the 
YARN Registry.
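
A hedged sketch of such a cleanup pass, assuming the GPG already computes 
the set of running applications and that the registry keeps one entry per 
app under a known path (both assumptions; the real layout may differ):

    import java.io.IOException;
    import java.util.Set;
    import org.apache.hadoop.registry.client.api.RegistryOperations;
    import org.apache.hadoop.yarn.api.records.ApplicationId;

    // Hypothetical cleanup: delete registry entries for finished apps.
    static void cleanRegistry(RegistryOperations registry, String appsPath,
        Set<ApplicationId> runningApps) throws IOException {
      for (String child : registry.list(appsPath)) {
        ApplicationId appId = ApplicationId.fromString(child);
        if (!runningApps.contains(appId)) {
          // The app is finished everywhere, so its UAM tokens are no
          // longer needed; recursive delete removes the whole entry.
          registry.delete(appsPath + "/" + child, true);
        }
      }
    }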






[jira] [Created] (YARN-8861) executorLock is misleading in ContainerLaunch

2018-10-09 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8861:
---

 Summary: executorLock is misleading in ContainerLaunch
 Key: YARN-8861
 URL: https://issues.apache.org/jira/browse/YARN-8861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Chandni Singh
Assignee: Chandni Singh









Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-09 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/

[Oct 8, 2018 6:24:56 AM] (sunilg) YARN-7825. [UI2] Maintain constant horizontal 
application info bar for
[Oct 8, 2018 2:17:42 PM] (elek) HDDS-521. Implement DeleteBucket REST endpoint. 
Contributed by Bharat
[Oct 8, 2018 4:40:37 PM] (haibochen) YARN-8659. RMWebServices returns only 
RUNNING apps when filtered with
[Oct 8, 2018 5:05:18 PM] (inigoiri) YARN-8843. updateNodeResource does not 
support units for memory.
[Oct 8, 2018 5:56:47 PM] (eyang) YARN-8763.  Added node manager websocket API 
for accessing containers.  




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/921/artifact/out/diff-javadoc-javadoc-root.t

[jira] [Created] (YARN-8860) Federation client interceptor class contains unwanted characters

2018-10-09 Thread Rakesh Shah (JIRA)
Rakesh Shah created YARN-8860:
-

 Summary: Federation client interceptor class contains unwanted 
characters
 Key: YARN-8860
 URL: https://issues.apache.org/jira/browse/YARN-8860
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Rakesh Shah


The FederationClientInterceptor class contains some unwanted (mis-encoded) 
characters in the Javadoc summary of its methods:

{noformat}
- * The Client submits an application to the Router. 鈥?The Router selects one
- * SubCluster to forward the request. 鈥?The Router inserts a tuple into
- * StateStore with the selected SubCluster (e.g. SC1) and the appId. 鈥?The
- * State Store replies with the selected SubCluster (e.g. SC1). 鈥?The Router
+ * The Client submits an application to the Router. 閳ワ拷 The Router selects one
+ * SubCluster to forward the request. 閳ワ拷 The Router inserts a tuple into
+ * StateStore with the selected SubCluster (e.g. SC1) and the appId. 閳ワ拷 The
+ * State Store replies with the selected SubCluster (e.g. SC1). 閳ワ拷 The Router
{noformat}
 


