[ https://issues.apache.org/jira/browse/YARN-882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120014#comment-16120014 ]

Rostislaw Krassow commented on YARN-882:
----------------------------------------

I got the same issue in production. During execution of a heavy Hive join 
(with the MapReduce execution engine) the corresponding 
$yarn.nodemanager.local-dirs/usercache/<user>/appcache/<app_id> directory kept 
growing. This eventually led to the affected nodes being eliminated (marked 
unhealthy) by the RM.

The quotas for the private/application cache should reflect the resource 
quotas of the defined YARN queues.
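
For reference, the existing NM-level knobs do not address this: the localizer 
cache target size only covers PUBLIC/PRIVATE resources and excludes the 
per-application appcache, and the disk health checker only reacts once a disk 
is already nearly full. A minimal yarn-site.xml sketch of these two settings 
(values shown are the defaults, purely for illustration):

{code:xml}
<!-- NM-wide target size for the localized resource cache (PUBLIC/PRIVATE
     visibility only); it does not cap a running application's appcache. -->
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>10240</value>
</property>

<!-- Once utilization of a local dir crosses this percentage, the dir is
     marked bad and the node can be reported unhealthy to the RM. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>90.0</value>
</property>
{code}

The appcache growth itself can be watched with a plain du -sh on the 
usercache/<user>/appcache/<app_id> path while the job is running.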

> Specify per user quota for private/application cache and user log files
> -----------------------------------------------------------------------
>
>                 Key: YARN-882
>                 URL: https://issues.apache.org/jira/browse/YARN-882
>             Project: Hadoop YARN
>          Issue Type: New Feature
>            Reporter: Omkar Vinit Joshi
>            Assignee: Omkar Vinit Joshi
>
> At present there is no limit on the number or total size of the files 
> localized by a single user. Similarly, there is no limit on the size of the 
> log files created by a user's running containers.
> We need to restrict users in this respect.
> For LocalizedResources this is a serious concern in a secured environment, 
> where a malicious user can start a single container and localize resources 
> whose total size >= DEFAULT_NM_LOCALIZER_CACHE_TARGET_SIZE_MB. Thereafter, 
> localization will either fail (if no free space is left on disk) or the 
> deletion service will keep removing localized files belonging to other 
> containers/applications. 
> The limit for logs/localized resources should be decided by the RM and sent 
> to the NM via the secured containerToken. All these configurations should be 
> per container instead of per user or per NM.



