[ https://issues.apache.org/jira/browse/HDFS-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12867745#action_12867745 ]
Allen Wittenauer commented on HDFS-1150:
----------------------------------------

More problems:

- The hadoop shell script doesn't check to see if jsvc is present and executable.
- The hadoop shell script is hard-coded to launch the datanode process as user 'hdfs'.
- The hadoop shell script is hard-coded to use /dev/stderr and /dev/stdout, which seems like a bad idea if something prevents the jsvc code from working properly.

I'm really confused as to how this is actually supposed to work in real-world usage:

- It appears the intention is that the hadoop command is to be run by root, based on the $EUID check. This means a lot more than just the java process gets executed as root. Why aren't we just re-using the mapred setuid code to launch the datanode process rather than taking on this dependency?
- This doesn't look like it will work with start-mapred.sh/start-all.sh unless those are also run as root.

> Verify datanodes' identities to clients in secure clusters
> ----------------------------------------------------------
>
>                 Key: HDFS-1150
>                 URL: https://issues.apache.org/jira/browse/HDFS-1150
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>    Affects Versions: 0.22.0
>            Reporter: Jakob Homan
>            Assignee: Jakob Homan
>         Attachments: HDFS-1150-y20.build-script.patch, HDFS-1150-Y20S-ready-5.patch,
>                      HDFS-1150-Y20S-ready-6.patch, HDFS-1150-Y20S-ready-7.patch,
>                      HDFS-1150-Y20S-ready-8.patch, HDFS-1150-Y20S-Rough-2.patch,
>                      HDFS-1150-Y20S-Rough-3.patch, HDFS-1150-Y20S-Rough-4.patch,
>                      HDFS-1150-Y20S-Rough.txt
>
>
> Currently we use block access tokens to allow datanodes to verify clients'
> identities; however, we don't have a way for clients to verify the
> authenticity of the datanodes themselves.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
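For illustration, the missing jsvc guard Allen describes could look something like the sketch below. The function name `check_jsvc` and the `JSVC_HOME` variable are hypothetical, not taken from the attached patches; the point is simply to fail fast, with a clear message, before attempting a secure datanode start.

```shell
#!/bin/sh
# Hedged sketch of the check the comment says bin/hadoop lacks:
# verify jsvc exists and is executable before launching through it.
# check_jsvc and JSVC_HOME are illustrative names, not from the patch.
check_jsvc() {
  jsvc_path="$1"
  if [ ! -x "$jsvc_path" ]; then
    echo "Error: jsvc not found or not executable at $jsvc_path" >&2
    return 1
  fi
}

# In bin/hadoop this would abort the secure start; here it just reports.
check_jsvc "${JSVC_HOME:-/usr/bin}/jsvc" || echo "would refuse secure datanode start"
```

With a guard like this, a misconfigured cluster fails with an explicit error instead of jsvc silently failing and the hard-coded /dev/stdout and /dev/stderr redirections hiding the cause.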