[ https://issues.apache.org/jira/browse/HDFS-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495906#comment-13495906 ]
Jason Lowe commented on HDFS-4178:
----------------------------------
The data corruption occurs when the process opens a new file descriptor for its
data channel and gets file descriptor 2, since that is the next available
descriptor after the prior close. If code in libc or elsewhere blindly assumes
it can write messages to fd 2, and fd 2 is now your custom data stream, those
messages corrupt your data.
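A minimal sketch of the descriptor reuse on a Linux system (an illustration added here, not part of the original report):
{code}
# fd 2 is closed before the child starts, so the kernel hands out
# descriptor 2 for the next open(). Listing the child's own fd table
# typically shows the freed descriptor 2 pointing at the directory
# ls just opened, rather than at a terminal or log:
$ sh -c 'ls -l /proc/self/fd' 2>&-
{code}
Any subsequent write aimed at "stderr" in that child now lands in whatever file happened to claim descriptor 2.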
> shell scripts should not close stderr
> -------------------------------------
>
> Key: HDFS-4178
> URL: https://issues.apache.org/jira/browse/HDFS-4178
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.0.2-alpha
> Reporter: Andy Isaacson
> Assignee: Andy Isaacson
> Attachments: hdfs4178.txt
>
>
> The {{start-dfs.sh}} and {{stop-dfs.sh}} scripts close stderr for some
> subprocesses using the construct
> bq. {{2>&-}}
> This is dangerous because child processes started this way will reuse
> file descriptor 2 for files they open. Since libc and many other
> codepaths assume that file descriptor 2 can be written to in error
> conditions, this can result in data corruption.
> It is much better to redirect stderr using the construct {{2>/dev/null}}.
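> A minimal illustration of the two constructs (the command name below is a placeholder, not a line from the scripts or the attached patch):
> {code}
> # risky: closes fd 2 entirely, so a child's next open() reuses descriptor 2
> some_command 2>&-
> # safe: fd 2 stays allocated, pointing at /dev/null; stray writes are discarded
> some_command 2>/dev/null
> {code}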