[ 
https://issues.apache.org/jira/browse/YARN-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536754#comment-14536754
 ] 

Eric Payne commented on YARN-1519:
----------------------------------

Hi [~raviprak]. Thank you for reviewing this issue.
{quote}
Nitpick: We don't set an upper limit for something we are going to malloc. 
Earlier it was at least limited to INT_MAX. Now it's LONG_MAX. I'd rather keep 
typecasting the long to int.
{quote}
[~hsn], do you want to make that change or should I?
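For anyone following along, here is a minimal sketch of the guard being discussed. The function name and structure are illustrative only, not the attached patch: it enforces the 1024 floor mentioned in the description and caps the value at INT_MAX so the long-to-int cast stays safe.
{code}
/* Illustrative sketch only, not the attached YARN-1519 patch. */
#include <limits.h>
#include <stddef.h>
#include <unistd.h>

static size_t pw_buffer_size(void) {
  long sz = sysconf(_SC_GETPW_R_SIZE_MAX);
  if (sz < 1024) {      /* -1 means not implemented; also rejects unreasonably small values */
    sz = 1024;          /* same minimum enforced in hadoop-common native code */
  }
  if (sz > INT_MAX) {   /* keep the long-to-int cast safe, per the nitpick above */
    sz = INT_MAX;
  }
  return (size_t) sz;
}
{code}
The size returned here would then be used for the buffer passed to the libc call, avoiding the invalid value that previously caused the segfault.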

> check if sysconf is implemented before using it
> -----------------------------------------------
>
>                 Key: YARN-1519
>                 URL: https://issues.apache.org/jira/browse/YARN-1519
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 3.0.0, 2.3.0
>            Reporter: Radim Kolar
>            Assignee: Radim Kolar
>              Labels: BB2015-05-TBR
>         Attachments: YARN-1519.002.patch, nodemgr-sysconf.txt
>
>
> If the sysconf value _SC_GETPW_R_SIZE_MAX is not implemented, it leads to a 
> segfault because an invalid pointer gets passed to a libc function.
> Fix: enforce a minimum value of 1024; the same method is used in hadoop-common 
> native code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)