[ https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345503#comment-16345503 ]

Daniel Templeton commented on YARN-7528:
----------------------------------------

If you do not specify a maximum value for a resource, it's set internally to 
{{Long.MAX_VALUE}}.  When a job is submitted, we check the resource request 
against the maximum for the resource, which requires a units conversion.  
Hmm... now that I write that out, it occurs to me that we should be 
converting the requested value into the default units, not the other way 
around (converting the maximum into the request's units), which would be why 
you're not seeing an issue.  I'll need to look at the code again.  Even if 
that's the case, you should still be able to cause the failure by requesting 
{{-Dmapreduce.map.resource.gpu=92233720368547k}} with your setup above.
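
For illustration, here is a minimal, self-contained sketch of the two conversions involved.  This is not the actual YARN {{UnitsConversionUtil}} code; the unit table and the assumption that the default unit in your setup is a small one like "m" (milli) are simplifications for the example:

{code:java}
import java.util.Map;

// Simplified model of the units conversion done when a request is checked
// against a resource's maximum allocation.  Unit names and scales are
// illustrative only, not the YARN implementation.
public class UnitOverflowSketch {

  // Scale of each unit relative to the smallest ("m" = milli) unit.
  private static final Map<String, Long> SCALE = Map.of(
      "m", 1L,             // milli
      "",  1_000L,         // base (no unit)
      "k", 1_000_000L);    // kilo

  // Convert value from one unit to another, failing loudly on overflow.
  static long convert(long value, String from, String to) {
    long fromScale = SCALE.get(from);
    long toScale = SCALE.get(to);
    if (fromScale >= toScale) {
      // Converting into a smaller unit multiplies -- this is where a
      // Long.MAX_VALUE maximum (or a large request) overflows.
      return Math.multiplyExact(value, fromScale / toScale);
    }
    return value / (toScale / fromScale);
  }

  public static void main(String[] args) {
    // 1. An unconfigured maximum defaults to Long.MAX_VALUE.  Converting it
    //    into a smaller default unit such as "m" overflows.
    try {
      convert(Long.MAX_VALUE, "", "m");
    } catch (ArithmeticException e) {
      System.out.println("max_allocation conversion overflows: " + e);
    }

    // 2. Even if the check converts the request into the default units
    //    instead, a large request in a big unit still overflows, e.g.
    //    -Dmapreduce.map.resource.gpu=92233720368547k converted to "m".
    try {
      convert(92_233_720_368_547L, "k", "m");
    } catch (ArithmeticException e) {
      System.out.println("request conversion overflows: " + e);
    }
  }
}
{code}

Either direction of conversion can blow up; which one you actually hit just depends on whose units are smaller, which is why the failure only shows up with certain unit combinations.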

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max_allocation calculation
> ------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7528
>                 URL: https://issues.apache.org/jira/browse/YARN-7528
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: documentation, resourcemanager
>    Affects Versions: 3.0.0
>            Reporter: Grant Sohn
>            Assignee: Szilard Nemeth
>            Priority: Major
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.


