[
https://issues.apache.org/jira/browse/HDFS-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500496#comment-13500496
]
Andy Isaacson commented on HDFS-4178:
-------------------------------------
bq. (I was under the misimpression that fd 0-2 were reserved unless explicitly
opened, or that they were redirected to /dev/null after dropping the
controlling terminal)
Yeah, it's really surprising that this pitfall is left lurking for people to
stumble into! There's no credible use case for leaving fd 0, 1, and 2 closed
during process startup, and it would be a huge win for {{_start}} to open
{{/dev/null}} on them as appropriate before running {{main()}}. Unfortunately,
I've confirmed that this is not done, and I did experimentally trigger the
"glibc detected bad free"-in-my-datafile failure mode under glibc 2.13 when
running with {{at}}.
Thanks for committing the fix!
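As an aside, a daemon can defend itself against being launched with fd 0-2
closed by opening {{/dev/null}} onto any missing low descriptors before it does
real work. The sketch below is purely illustrative (it is not part of the
committed fix, and the helper name is invented):
{code}
/* Illustrative sketch only, not part of the committed fix: make sure
 * descriptors 0, 1 and 2 are open before doing anything else, so a later
 * open() cannot be handed fd 2. */
#include <fcntl.h>
#include <unistd.h>

static void ensure_std_fds_open(void)
{
    int fd;
    /* open() always returns the lowest free descriptor, so this loop fills
     * any holes among 0, 1, 2 and then discards the first fd above 2. */
    do {
        fd = open("/dev/null", O_RDWR);
        if (fd < 0)
            return;            /* give up quietly if /dev/null is unavailable */
    } while (fd <= 2);
    close(fd);                 /* fd > 2 means 0-2 are now all open */
}
{code}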
> shell scripts should not close stderr
> -------------------------------------
>
> Key: HDFS-4178
> URL: https://issues.apache.org/jira/browse/HDFS-4178
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.0.2-alpha
> Reporter: Andy Isaacson
> Assignee: Andy Isaacson
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs4178.txt
>
>
> The {{start-dfs.sh}} and {{stop-dfs.sh}} scripts close stderr for some
> subprocesses using the construct
> bq. {{2>&-}}
> This is dangerous because a child process started with stderr closed will
> re-use file descriptor 2 for files it opens. Since libc and many other
> code paths assume that file descriptor 2 can be written to in error
> conditions, this can result in data corruption.
> It is much better to redirect stderr using the construct {{2>/dev/null}}.
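To make the failure mode concrete, here is a minimal sketch of what happens
inside a child that inherits a closed stderr (the file name {{datafile}} is
invented for illustration): its first {{open()}} is handed descriptor 2, so
anything libc later prints to stderr lands in the data file. With
{{2>/dev/null}} instead, descriptor 2 stays open on {{/dev/null}} and the
diagnostic is harmlessly discarded.
{code}
/* Illustrative sketch only: simulates a child whose parent used "2>&-".
 * The file name "datafile" is invented for this example. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    close(2);   /* the state a child inherits after "2>&-" */

    /* The lowest free descriptor is 2, so the data file lands on stderr's fd. */
    int fd = open("datafile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    printf("data file opened on fd %d\n", fd);   /* prints 2 */

    /* From now on, anything libc writes to stderr (for example a "glibc
     * detected ... bad free" diagnostic) goes into the data file instead. */
    fprintf(stderr, "*** this text corrupts datafile ***\n");
    return 0;
}
{code}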