[ 
https://issues.apache.org/jira/browse/AMBARI-18043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409342#comment-15409342
 ] 

Andrew Onischuk commented on AMBARI-18043:
------------------------------------------

The above is caused by a longstanding Hadoop QA issue.

Posting the test results:
{noformat}
[INFO] Rat check: Summary of files. Unapproved: 0 unknown: 0 generated: 0 
approved: 148 licence.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Ambari Views ...................................... SUCCESS [2.155s]
[INFO] Ambari Metrics Common ............................. SUCCESS [1.173s]
[INFO] Ambari Server ..................................... SUCCESS [1:01.716s]
[INFO] Ambari Agent ...................................... SUCCESS [11.630s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17.446s
[INFO] Finished at: Fri Aug 05 13:56:54 EEST 2016
[INFO] Final Memory: 70M/1096M
[INFO] ------------------------------------------------------------------------
{noformat}

> Not able to proceed with RU downgrade due to spark clients failing to install
> -----------------------------------------------------------------------------
>
>                 Key: AMBARI-18043
>                 URL: https://issues.apache.org/jira/browse/AMBARI-18043
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-18043.patch
>
>
> RU downgrade is stuck at the step to restart Spark clients.
> Restart of the Spark clients is failing with the error below:
>     
>     
>     
>     Traceback (most recent call last):
>       File 
> "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/spark_client.py",
>  line 88, in <module>
>         SparkClient().execute()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
>         method(env)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 676, in restart
>         self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>       File 
> "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/spark_client.py",
>  line 79, in pre_upgrade_restart
>         import params
>       File 
> "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/params.py",
>  line 212, in <module>
>         livy_pid_dir = status_params.livy_pid_dir
>     AttributeError: 'module' object has no attribute 'livy_pid_dir'
>     
> Not able to proceed with the downgrade even with the 'Ignore and proceed'
> functionality.
> Live cluster is available here : <http://172.22.115.32:8080/>
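The traceback boils down to `params.py` referencing `status_params.livy_pid_dir` when the `status_params` module does not define that attribute. A minimal sketch of the failure pattern and a defensive workaround (the module stand-in and the fallback path are illustrative, not the actual fix in the attached patch):

```python
# Sketch of the AttributeError seen in params.py line 212: accessing an
# attribute that a module does not define raises AttributeError.
import types

# Stand-in for the status_params module, which here lacks livy_pid_dir.
status_params = types.ModuleType('status_params')

try:
    livy_pid_dir = status_params.livy_pid_dir
except AttributeError as e:
    print(e)  # e.g. module has no attribute 'livy_pid_dir'

# A defensive alternative: fall back to a default when the attribute is
# missing (the default path below is hypothetical).
livy_pid_dir = getattr(status_params, 'livy_pid_dir', '/var/run/livy')
```

In the actual code this situation arises when the Spark `params.py` from a newer stack version is paired with a `status_params.py` that predates the Livy-related attributes, so a guard like the `getattr` above avoids the hard failure during restart.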



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
