[ https://issues.apache.org/jira/browse/HADOOP-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12664746#action_12664746 ]
Doug Cutting commented on HADOOP-5059:
--------------------------------------
> why is it a bad idea for Java to use vfork()?
vfork() is very fragile: until the child calls exec(), the new process
runs in the same memory as its parent, including the parent's stack. The
parent process is suspended until exec() is called, but the child can
still easily wreak havoc.
https://www.securecoding.cert.org/confluence/display/seccode/POS33-C.+Do+not+use+vfork()
That said, it seems folks do still use vfork() to work around exactly
this kind of memory-allocation failure, e.g.:
http://bugs.sun.com/view_bug.do?bug_id=5049299
http://sources.redhat.com/ml/glibc-bugs/2004-09/msg00045.html
> 'whoami', 'topologyscript' calls failing with java.io.IOException: error=12, Cannot allocate memory
> ---------------------------------------------------------------------------------------------------
>
> Key: HADOOP-5059
> URL: https://issues.apache.org/jira/browse/HADOOP-5059
> Project: Hadoop Core
> Issue Type: Bug
> Components: util
> Environment: On nodes with
> physical memory 32G
> Swap 16G
> Primary/Secondary Namenode using 25G of heap or more
> Reporter: Koji Noguchi
> Attachments: TestSysCall.java
>
>
> We've seen primary/secondary namenodes fail when calling whoami or
> topologyscripts.
> (Discussed as part of HADOOP-4998)
> Sample stack traces.
> Primary Namenode
> {noformat}
> 2009-01-12 03:57:27,381 WARN org.apache.hadoop.net.ScriptBasedMapping:
> java.io.IOException: Cannot run program "/path/topologyProgram" (in directory "/path"): java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
>         at org.apache.hadoop.util.Shell.run(Shell.java:134)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
>         at org.apache.hadoop.net.ScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:122)
>         at org.apache.hadoop.net.ScriptBasedMapping.resolve(ScriptBasedMapping.java:73)
>         at org.apache.hadoop.dfs.FSNamesystem$ResolutionMonitor.run(FSNamesystem.java:1869)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>         ... 7 more
> 2009-01-12 03:57:27,381 ERROR org.apache.hadoop.fs.FSNamesystem: The resolve call returned null! Using /default-rack for some hosts
> 2009-01-12 03:57:27,381 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/55.5.55.55:50010
> {noformat}
> Secondary Namenode
> {noformat}
> 2008-10-09 02:00:58,288 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException:
> javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": java.io.IOException: error=12, Cannot allocate memory
>         at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
>         at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
>         at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:257)
>         at org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:370)
>         at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:359)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
>         at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
>         at java.lang.Thread.run(Thread.java:619)
>         at org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:372)
>         at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:359)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
>         at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
>         at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.