[ https://issues.apache.org/jira/browse/AMBARI-14241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044758#comment-15044758 ]

Hadoop QA commented on AMBARI-14241:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12776054/AMBARI-14241.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in 
ambari-server:

                  org.apache.ambari.server.state.ConfigHelperTest
                  org.apache.ambari.server.orm.dao.RequestDAOTest
                  org.apache.ambari.server.orm.dao.AlertDispatchDAOTest
                  org.apache.ambari.server.orm.dao.AlertDefinitionDAOTest

Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/4508//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/4508//console

This message is automatically generated.

> RU on non-HDFS filesystems, native commands like hdfs dfsadmin fail
> -------------------------------------------------------------------
>
>                 Key: AMBARI-14241
>                 URL: https://issues.apache.org/jira/browse/AMBARI-14241
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.2.0
>            Reporter: Jayush Luniya
>            Assignee: Jayush Luniya
>            Priority: Blocker
>             Fix For: 2.2.0
>
>         Attachments: AMBARI-14241.patch
>
>
> *Issue*
> {code}
> 2015-12-02 19:00:50,698 - 
> Execute['/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin 
> -rollingUpgrade prepare'] {'logoutput': True, 'user': 'hdfs'}
> rollingUpgrade: FileSystem wasb://hostname is not an HDFS file system
> Usage: hdfs dfsadmin [-rollingUpgrade [<query|prepare|finalize>]]
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py",
>  line 432, in <module>
>     NameNode().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 217, in execute
>     method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py",
>  line 175, in prepare_rolling_upgrade
>     namenode_upgrade.prepare_rolling_upgrade(hfds_binary)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode_upgrade.py",
>  line 240, in prepare_rolling_upgrade
>     logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 154, in __init__
>     self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 158, in run
>     self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 121, in run_action
>     provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 238, in action_run
>     tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
>     tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
>     raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 
> '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -rollingUpgrade 
> prepare' returned 255. rollingUpgrade: FileSystem wasb://hostname is not an 
> HDFS file system
> Usage: hdfs dfsadmin [-rollingUpgrade [<query|prepare|finalize>]]
> {code}
> *Fix:*
> To fix this issue, we need to explicitly pass the "-fs" argument to all hdfs 
> dfsadmin commands in the following files: journalnode_upgrade.py, 
> namenode_upgrade.py, datanode_upgrade.py, namenode.py
> Example: hdfs dfsadmin -fs hdfs://mycluster -rollingUpgrade prepare
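The fix described above can be sketched as a small Python helper in the style of the Ambari agent scripts seen in the traceback. This is a minimal illustration, not Ambari's actual API: the helper name `get_dfsadmin_base_command` and its signature are assumptions made for this example.

```python
# Hypothetical sketch of the fix: build every `hdfs dfsadmin` invocation with
# an explicit "-fs <namenode-address>" so the command targets HDFS directly,
# instead of relying on fs.defaultFS, which on cloud deployments may point at
# a non-HDFS filesystem such as wasb://.
def get_dfsadmin_base_command(hdfs_binary, namenode_address):
    """Return an `hdfs dfsadmin` command prefix pinned to an HDFS namespace."""
    return "{0} dfsadmin -fs {1}".format(hdfs_binary, namenode_address)

if __name__ == "__main__":
    base = get_dfsadmin_base_command(
        "/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs", "hdfs://mycluster")
    # Prints the full command as it would be passed to Execute[...]:
    print(base + " -rollingUpgrade prepare")
```

With this prefix, the rolling-upgrade steps in namenode_upgrade.py and friends would run against hdfs://mycluster even when the cluster's default filesystem is WASB.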



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
