[ https://issues.apache.org/jira/browse/HADOOP-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12664321#action_12664321 ]

Allen Wittenauer commented on HADOOP-5059:
------------------------------------------

That assumes you have an OS that supports overcommit.  Most don't, and 
rightfully so, as overcommit makes memory management wildly unpredictable 
from an operations perspective.  Thus our issues: fork() has to be able to 
commit a copy of the parent's entire address space, so forking anything, even 
a tiny whoami, out of a 25G-heap JVM needs roughly another 25G of commit, and 
without overcommit the kernel refuses with ENOMEM (error=12).
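
To see the failure mode outside of Hadoop, here is a minimal sketch in the 
spirit of the attached TestSysCall.java (the class below is my own 
illustration, not the attachment itself; the chunk count and heap sizes are 
assumptions).  On a JDK of this era, ProcessBuilder fork()s, so running this 
with -Xmx sized near physical memory on a box with overcommit disabled should 
die with "error=12, Cannot allocate memory"; newer JDKs that exec via 
posix_spawn may not reproduce it.

{noformat}
import java.io.IOException;

// Hypothetical repro: fill most of a large heap, then fork a tiny program.
public class ForkEnomemRepro {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    // Touch a lot of heap so the committed address space is really backed.
    int chunks = args.length > 0 ? Integer.parseInt(args[0]) : 20;
    byte[][] ballast = new byte[chunks][];
    for (int i = 0; i < chunks; i++) {
      ballast[i] = new byte[1 << 30]; // 1 GB per chunk
    }
    // fork()+exec() of even "whoami" must be able to commit a copy of the
    // whole parent address space when the kernel refuses to overcommit.
    Process p = new ProcessBuilder("whoami").start();
    System.out.println("exec succeeded, exit=" + p.waitFor());
    System.out.println("ballast chunks kept live: " + ballast.length);
  }
}
{noformat}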

Hadoop needs to do two things:

a) Move the topology program into a completely separate daemon and talk to it 
over a socket on the loopback interface.  This trick has been used by squid 
(unlinkd) and many other applications quite effectively to offload all of the 
forking.
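
To make (a) concrete, here is a rough sketch of the namenode side.  Everything 
in it is invented for illustration: the daemon, the port, and the 
one-hostname-in, one-rack-out line protocol are assumptions, not an existing 
Hadoop interface.

{noformat}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical client for an out-of-process topology daemon listening on
// loopback.  Send one hostname per line, read back one rack path per line.
public class TopologyDaemonClient {
  private final int port;

  public TopologyDaemonClient(int port) {
    this.port = port;
  }

  public String resolve(String host) throws IOException {
    // One connection per lookup keeps the sketch simple; a real client
    // would keep the socket open or pool connections.
    try (Socket s = new Socket("127.0.0.1", port);
         PrintWriter out = new PrintWriter(
             new OutputStreamWriter(s.getOutputStream(),
                                    StandardCharsets.UTF_8), true);
         BufferedReader in = new BufferedReader(
             new InputStreamReader(s.getInputStream(),
                                   StandardCharsets.UTF_8))) {
      out.println(host);
      String rack = in.readLine();
      // Mirror the script contract: null means unresolved, and the caller
      // falls back to /default-rack.
      return (rack == null || rack.isEmpty()) ? null : rack;
    }
  }
}
{noformat}

The point is that the daemon does all the fork()ing of the actual topology 
script (if it still needs one at all) inside a tiny process where fork is 
cheap, instead of inside a 25G JVM.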

b) Stop forking for things it should be doing via a native method, rather 
than relying on external applications being in certain locations or, worse, 
found via a completely untrusted path.  The output of external programs is 
not guaranteed to be stable, even in POSIX-land.
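
For the whoami case from this ticket, the fork is avoidable entirely.  A real 
fix would be a JNI native method calling getpwuid(getuid()); the sketch below 
settles for the pure-Java approximation, which I'm assuming is good enough 
for login purposes, even if it is not byte-for-byte what whoami prints in 
every corner case.

{noformat}
// Sketch: resolve the current user without fork()+exec() of "whoami".
public class InProcessWhoami {
  public static String currentUser() {
    // Populated by the JVM at startup from the process credentials; no
    // subprocess, no PATH lookup, no dependency on coreutils being present.
    return System.getProperty("user.name");
  }

  public static void main(String[] args) {
    System.out.println(currentUser());
  }
}
{noformat}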

As a sidenote to (a), Owen has mentioned moving the topology program to be a 
loadable Java class.  AFAIK, this won't work in the real world, as it means 
that in order to change the topology on the fly, we have to restart the 
namenode.  Or worse, you'll need to get your admin team to learn Java. ;)

> 'whoami', 'topologyscript' calls failing with java.io.IOException: error=12, 
> Cannot allocate memory
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5059
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5059
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: util
>         Environment: On nodes with 
> physical memory 32G
> Swap 16G 
> Primary/Secondary Namenode using 25G of heap or more
>            Reporter: Koji Noguchi
>         Attachments: TestSysCall.java
>
>
> We've seen primary/secondary namenodes fail when calling whoami or the
> topology script.
> (Discussed as part of HADOOP-4998)
> Sample stack traces:
> Primary Namenode
> {noformat}
> 2009-01-12 03:57:27,381 WARN org.apache.hadoop.net.ScriptBasedMapping: java.io.IOException: Cannot run program "/path/topologyProgram" (in directory "/path"): java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
>         at org.apache.hadoop.util.Shell.run(Shell.java:134)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
>         at org.apache.hadoop.net.ScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:122)
>         at org.apache.hadoop.net.ScriptBasedMapping.resolve(ScriptBasedMapping.java:73)
>         at org.apache.hadoop.dfs.FSNamesystem$ResolutionMonitor.run(FSNamesystem.java:1869)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>         ... 7 more
> 2009-01-12 03:57:27,381 ERROR org.apache.hadoop.fs.FSNamesystem: The resolve call returned null! Using /default-rack for some hosts
> 2009-01-12 03:57:27,381 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/55.5.55.55:50010
> {noformat}
> Secondary Namenode
> {noformat}
> 2008-10-09 02:00:58,288 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException: javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": java.io.IOException: error=12, Cannot allocate memory
>         at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
>         at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
>         at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:257)
>         at org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:370)
>         at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:359)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
>         at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
>         at java.lang.Thread.run(Thread.java:619)
>         at org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:372)
>         at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:359)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
>         at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
