[ https://issues.apache.org/jira/browse/AMBARI-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388826#comment-14388826 ]
Hadoop QA commented on AMBARI-8244:
-----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12708454/AMBARI-8244.7.combined.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The test build failed in ambari-server

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/2191//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/2191//console

This message is automatically generated.

> Ambari HDP 2.0.6+ stacks do not work with fs.defaultFS not being hdfs
> ---------------------------------------------------------------------
>
> Key: AMBARI-8244
> URL: https://issues.apache.org/jira/browse/AMBARI-8244
> Project: Ambari
> Issue Type: Bug
> Components: stacks
> Affects Versions: 2.0.0
> Reporter: Ivan Mitic
> Assignee: Ivan Mitic
> Labels: HDP
> Fix For: 2.1.0
>
> Attachments: AMBARI-8244.2.patch, AMBARI-8244.3.patch, AMBARI-8244.4.patch, AMBARI-8244.5.patch, AMBARI-8244.6.patch, AMBARI-8244.7.combined.patch, AMBARI-8244.patch
>
>
> Right now, changing the default file system does not work with the HDP 2.0.6+ stacks. Given that it is common to run HDP against another file system in the cloud, adding support for this would be very useful. One alternative is a separate stack definition for other file systems; however, since I found only two minor bugs blocking this scenario, I would rather extend the existing code.
> Bugs:
> - One issue is in the Nagios install scripts, which assume that fs.defaultFS contains the namenode port number.
> - Another issue is in the HDFS install scripts, where the {{hadoop dfsadmin}} command only works when hdfs is the default file system.
> The fix for both places is to extract the namenode address/port from {{dfs.namenode.rpc-address}} when it is defined, and use it instead of relying on {{fs.defaultFS}} (see the sketch at the end of this message).
> I haven't included any tests yet (this is my first Ambari patch and I'm not sure what is appropriate, so please comment).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
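A minimal, hypothetical sketch of the idea described in the issue (prefer {{dfs.namenode.rpc-address}} over {{fs.defaultFS}} when locating the namenode); the real changes are in the attached patches, and the helper name, the dictionary inputs, and the 8020 default port below are assumptions made purely for illustration:

{code:python}
# Illustrative sketch only -- not the code from the attached patches.
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2, as used by Ambari scripts of this era


def get_namenode_address(hdfs_site, core_site):
    """Return (host, port) of the NameNode RPC endpoint.

    hdfs_site and core_site are plain dicts of the rendered hdfs-site.xml /
    core-site.xml properties (hypothetical inputs for this sketch).
    """
    rpc_address = hdfs_site.get('dfs.namenode.rpc-address')
    if rpc_address:
        # Preferred source, e.g. "nn1.example.com:8020"; works no matter
        # what scheme fs.defaultFS points to.
        host, _, port = rpc_address.partition(':')
        return host, (int(port) if port else 8020)  # 8020 assumed as the conventional default

    # Fall back to fs.defaultFS only when it really is an hdfs:// URI.
    default_fs = urlparse(core_site.get('fs.defaultFS', ''))
    if default_fs.scheme == 'hdfs' and default_fs.hostname:
        return default_fs.hostname, default_fs.port or 8020

    raise ValueError('Cannot determine the NameNode address from the configuration')


# Example: a cloud default file system with HDFS still deployed alongside it.
hdfs_site = {'dfs.namenode.rpc-address': 'nn1.example.com:8020'}
core_site = {'fs.defaultFS': 'wasb://data@account.blob.core.windows.net'}
print(get_namenode_address(hdfs_site, core_site))   # ('nn1.example.com', 8020)
{code}

With a non-hdfs default file system such as the WASB URI above, the helper still resolves the namenode endpoint from {{dfs.namenode.rpc-address}} instead of trying to parse a port out of {{fs.defaultFS}}, which is the failure mode the issue describes for the Nagios and HDFS install scripts.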