[ https://issues.apache.org/jira/browse/BIGTOP-620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13296028#comment-13296028 ]
Roman Shaposhnik commented on BIGTOP-620:
-----------------------------------------

Sorry for the belated reply (Hadoop Summit). And sorry for not explaining the Shell functionality -- basically, you don't need to use sudo once you've created a shell under a particular user. IOW, the following:

{noformat}
sh.exec("sudo -u hdfs hdfs dfsadmin -report");
{noformat}

should be

{noformat}
shHDFS.exec("hdfs dfsadmin -report");
{noformat}

but the shell should be created like this:

{noformat}
static Shell shHDFS = new Shell("/bin/bash", "hdfs");
{noformat}

To answer your earlier question: the tests can't assume that they run on the NameNode; everything else should be fine. At some point we might need to introduce a feature that would let us classify the tests into buckets -- those that are potentially destructive to the environment and those that are safe. But how to make test-suite creation easy to use remains an open question.

> Add test for dfsadmin commands
> ------------------------------
>
>                 Key: BIGTOP-620
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-620
>             Project: Bigtop
>          Issue Type: Test
>    Affects Versions: 0.4.0
>            Reporter: Sujay Rau
>             Fix For: 0.4.0
>
>         Attachments: bigtop-620.patch
>
> Wrote a test that goes through the "hdfs dfsadmin -???" commands and makes sure no errors are returned, as well as checking some functionality.
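As a rough illustration of the pattern Roman describes -- binding the user once at shell-construction time so callers never spell out sudo themselves -- a simplified wrapper might look like the sketch below. This is not the actual Bigtop itest Shell implementation; `MiniShell` and its methods are hypothetical names for illustration only.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified sketch of a Shell("/bin/bash", "hdfs")-style
// wrapper: if a user is supplied, every command is transparently run via
// "sudo -u <user>", so test code just calls exec("hdfs dfsadmin -report").
public class MiniShell {
    private final String shell;
    private final String user; // null means: run as the current user, no sudo

    public MiniShell(String shell, String user) {
        this.shell = shell;
        this.user = user;
    }

    public String exec(String command) throws Exception {
        List<String> argv = new ArrayList<>();
        if (user != null) {          // prepend sudo only when a user was bound
            argv.add("sudo");
            argv.add("-u");
            argv.add(user);
        }
        argv.add(shell);
        argv.add("-c");
        argv.add(command);
        Process p = new ProcessBuilder(argv).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r =
                 new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // No user bound here, so the command runs as the current user;
        // with new MiniShell("/bin/bash", "hdfs") it would go through sudo.
        MiniShell sh = new MiniShell("/bin/bash", null);
        System.out.print(sh.exec("echo hello"));
    }
}
```

With a user bound at construction, a test like the one in this issue would create the shell once (`static MiniShell shHDFS = new MiniShell("/bin/bash", "hdfs")`) and then issue plain `hdfs dfsadmin` commands against it.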