[ 
https://issues.apache.org/jira/browse/YARN-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535954#comment-14535954
 ] 

Ravi Prakash commented on YARN-1519:
------------------------------------

Nitpick: We don't set an upper limit for something we are going to malloc.
Earlier it was at least limited to INT_MAX; now it's LONG_MAX. I'd rather keep
typecasting the long to an int.
Otherwise +1. Please change that and I'm happy to commit.
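To illustrate the nitpick, here is a minimal, hypothetical helper (not from the patch itself) that keeps the old INT_MAX bound by clamping a sysconf-derived long before it is used as a malloc size:

```c
#include <limits.h>

/* Hypothetical sketch of the reviewer's suggestion: clamp a
 * sysconf-derived long into the int range before it becomes a
 * malloc size, so the request can never approach LONG_MAX. */
static int clamp_to_int(long value) {
    if (value < 0)
        return 0;          /* negative sizes make no sense for malloc */
    if (value > INT_MAX)
        return INT_MAX;    /* restore the earlier INT_MAX upper bound */
    return (int)value;
}
```

The cast back to int is what bounds the allocation; the caller would then still apply the 1024-byte minimum described in the issue.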

> check if sysconf is implemented before using it
> -----------------------------------------------
>
>                 Key: YARN-1519
>                 URL: https://issues.apache.org/jira/browse/YARN-1519
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 3.0.0, 2.3.0
>            Reporter: Radim Kolar
>            Assignee: Radim Kolar
>              Labels: BB2015-05-TBR
>         Attachments: YARN-1519.002.patch, nodemgr-sysconf.txt
>
>
> If the sysconf value _SC_GETPW_R_SIZE_MAX is not implemented, it leads to a 
> segfault because an invalid pointer gets passed to a libc function.
> Fix: enforce a minimum value of 1024; the same method is used in the 
> hadoop-common native code.
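The fix described above can be sketched as follows. This is an illustrative reconstruction, not the attached patch: POSIX sysconf() returns -1 when _SC_GETPW_R_SIZE_MAX is unavailable, and the sketch falls back to the 1024-byte minimum so the buffer later handed to getpwuid_r() is always a valid size.

```c
#include <unistd.h>
#include <stddef.h>

/* Hypothetical sketch of the described fix: when sysconf() reports
 * that _SC_GETPW_R_SIZE_MAX is not implemented (it returns -1),
 * fall back to a 1024-byte minimum instead of passing a bogus size
 * (and hence an invalid buffer) to the libc getpw*_r functions. */
static size_t pw_buffer_size(void) {
    long size = sysconf(_SC_GETPW_R_SIZE_MAX);
    if (size < 1024)       /* covers -1 (not implemented) and tiny values */
        size = 1024;
    return (size_t)size;
}
```

The single `< 1024` comparison handles both the "not implemented" case and implausibly small limits in one branch, which is the same approach the issue attributes to the hadoop-common native code.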



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
