[ 
https://issues.apache.org/jira/browse/AMBARI-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-13548:
-----------------------------------------
             Assignee: Alejandro Fernandez
    Affects Version/s: 2.1.3
                       2.2.0
        Fix Version/s: 2.1.3
                       2.2.0
          Description: 
The following error occurs during an express upgrade.

{code}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py",
 line 401, in <module>
    NameNode().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 223, in execute
    method(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 506, in restart
    self.start(env, upgrade_type=upgrade_type)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py",
 line 99, in start
    namenode(action="start", hdfs_binary=hdfs_binary, 
upgrade_type=upgrade_type, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
line 89, in thunk
    return fn(*args, **kwargs)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py",
 line 185, in namenode
    create_hdfs_directories(is_active_namenode_cmd)
  File 
"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py",
 line 248, in create_hdfs_directories
    only_if=check
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 154, in __init__
    self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 154, in run
    self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 120, in run_action
    provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 396, in action_create_on_execute
    self.action_delayed("create")
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 393, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 248, in action_delayed
    self._create_resource()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 258, in _create_resource
    self._create_directory(self.main_resource.resource.target)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 282, in _create_directory
    self.util.run_command(target, 'MKDIRS', method='PUT')
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 203, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
'%{http_code}' -X PUT 
'http://jay-trunk-1.c.pramod-thangali.internal:50070/webhdfs/v1/tmp?op=MKDIRS&user.name=hdfs''
 returned status_code=403. 
{
  "RemoteException": {
    "exception": "SafeModeException", 
    "javaClassName": 
"org.apache.hadoop.hdfs.server.namenode.SafeModeException", 
    "message": "Cannot create directory /tmp. Name node is in safe mode.\nThe 
reported blocks 0 needs additional 7 blocks to reach the threshold 1.0000 of 
total blocks 6.\nThe number of live datanodes 0 has reached the minimum number 
0. Safe mode will be turned off automatically once the thresholds have been 
reached."
  }
}
{code}

It appears that the HdfsResource call no longer skips the task when the only_if 
condition is set to None.
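
A minimal sketch of the skip semantics this report expects (the function and its name are illustrative only, not the actual resource_management API):

```python
# Hypothetical illustration of the only_if behavior described above:
# a None check should skip the action, a string check should gate it
# on the shell command's exit status.
import subprocess

def should_run(only_if):
    """Return True when the guarded action should execute.

    only_if is None -> skip entirely (the behavior this issue says
                       regressed, e.g. when is_active_namenode_cmd
                       is unset).
    only_if is str  -> run the shell check; execute only on success.
    """
    if only_if is None:
        return False  # skip: no active-NameNode check available
    return subprocess.call(only_if, shell=True) == 0
```

Under these semantics, a None check would skip the HDFS directory creation instead of issuing the WebHDFS MKDIRS call against a NameNode that may still be in safe mode.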

For the NonRolling Upgrade Packs, change the titles of the groups.
Rename the high-level and low-level groups as follows:
- Stop Components for High-Level Services
- Stop Components for Core Services (HDFS, HBase, ZooKeeper and Ranger)

Rename these others:
- Zookeeper => ZooKeeper
- MR and YARN => YARN and MapReduce2
- Take Backups => Perform Backups
- Update Desired Stack Id => Update Target Stack

          Component/s: ambari-server
           Issue Type: Story  (was: Bug)

> Stop-and-Start Upgrade: NameNode restart fails since HdfsResource only_if 
> condition not working, change title of Groups
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-13548
>                 URL: https://issues.apache.org/jira/browse/AMBARI-13548
>             Project: Ambari
>          Issue Type: Story
>          Components: ambari-server
>    Affects Versions: 2.2.0, 2.1.3
>            Reporter: Alejandro Fernandez
>            Assignee: Alejandro Fernandez
>             Fix For: 2.2.0, 2.1.3
>
>
> The following error occurs during an express upgrade.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py",
>  line 401, in <module>
>     NameNode().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 223, in execute
>     method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 506, in restart
>     self.start(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py",
>  line 99, in start
>     namenode(action="start", hdfs_binary=hdfs_binary, 
> upgrade_type=upgrade_type, env=env)
>   File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
> line 89, in thunk
>     return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py",
>  line 185, in namenode
>     create_hdfs_directories(is_active_namenode_cmd)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py",
>  line 248, in create_hdfs_directories
>     only_if=check
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 154, in __init__
>     self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 154, in run
>     self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 120, in run_action
>     provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 396, in action_create_on_execute
>     self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 393, in action_delayed
>     self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 248, in action_delayed
>     self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 258, in _create_resource
>     self._create_directory(self.main_resource.resource.target)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 282, in _create_directory
>     self.util.run_command(target, 'MKDIRS', method='PUT')
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 203, in run_command
>     raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT 
> 'http://jay-trunk-1.c.pramod-thangali.internal:50070/webhdfs/v1/tmp?op=MKDIRS&user.name=hdfs''
>  returned status_code=403. 
> {
>   "RemoteException": {
>     "exception": "SafeModeException", 
>     "javaClassName": 
> "org.apache.hadoop.hdfs.server.namenode.SafeModeException", 
>     "message": "Cannot create directory /tmp. Name node is in safe mode.\nThe 
> reported blocks 0 needs additional 7 blocks to reach the threshold 1.0000 of 
> total blocks 6.\nThe number of live datanodes 0 has reached the minimum 
> number 0. Safe mode will be turned off automatically once the thresholds have 
> been reached."
>   }
> }
> {code}
> It appears that the HdfsResource call no longer skips the task when the 
> only_if condition is set to None.
> For the NonRolling Upgrade Packs, change the titles of the groups.
> Rename the high-level and low-level groups as follows:
> - Stop Components for High-Level Services
> - Stop Components for Core Services (HDFS, HBase, ZooKeeper and Ranger)
> Rename these others:
> - Zookeeper => ZooKeeper
> - MR and YARN => YARN and MapReduce2
> - Take Backups => Perform Backups
> - Update Desired Stack Id => Update Target Stack



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)