[ https://issues.apache.org/jira/browse/YARN-492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13607819#comment-13607819 ]

Hitesh Shah commented on YARN-492:
----------------------------------

Never mind, 50010 is the default DataNode port. What process is opening port 
44871? If it is the NodeManager, do you have log aggregation enabled? Could 
you try running the test with log aggregation disabled and let us know if the 
problem is still reproducible? 
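For reference, log aggregation in Hadoop 2.x is controlled by the `yarn.log-aggregation-enable` property. A minimal sketch of disabling it in yarn-site.xml (the NodeManager must be restarted for the change to take effect):

```
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>false</value>
</property>
```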
                
> Too many open files error to launch a container
> -----------------------------------------------
>
>                 Key: YARN-492
>                 URL: https://issues.apache.org/jira/browse/YARN-492
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>         Environment: RedHat Linux
>            Reporter: Krishna Kishore Bonagiri
>
> I am running the date command with YARN's distributed shell example in a 
> loop of 1000 iterations, like this:
> yarn jar 
> /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
>  org.apache.hadoop.yarn.applications.distributedshell.Client --jar 
> /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
>  --shell_command date --num_containers 2
> Around the 730th iteration or so, the node manager's log reports an error 
> saying it failed to launch a container because there are "Too many open 
> files". When I observe with the lsof command, I find that one file of the 
> following kind is left open for each run of the Application Master, and the 
> count keeps growing as the loop runs:
> node1:44871->node1:50010
> Thanks,
> Kishore
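The leak described above can be observed from the shell. A hedged sketch, assuming the NodeManager runs on the same host and that `$DSHELL_JAR` points to the distributed-shell jar from the report; the PID lookup via `jps` and the run count of 10 are illustrative, not from the report:

```shell
# Find the NodeManager PID (illustrative; any PID-lookup method works).
nm_pid=$(jps | awk '/NodeManager/ {print $1}')

for i in $(seq 1 10); do
  yarn jar "$DSHELL_JAR" \
    org.apache.hadoop.yarn.applications.distributedshell.Client \
    --jar "$DSHELL_JAR" --shell_command date --num_containers 2
  # Count TCP connections from the NodeManager to the DataNode port (50010
  # by default); a steadily growing count matches the reported leak.
  leaked=$(lsof -p "$nm_pid" -nP -i TCP 2>/dev/null | grep -c ':50010')
  echo "run $i: $leaked connections to :50010"
done
```

If the count climbs by one per run, as in the report, the descriptors are not being released between Application Master runs.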

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
