[jira] [Commented] (AMBARI-21542) AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095749#comment-16095749
 ] 

Hudson commented on AMBARI-21542:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1730 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1730/])
AMBARI-21542. AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade. (swagle: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=a7cc3801a317bf95d6037250ea223e67d54502d2])
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/services/AMBARI_METRICS/configuration/ams-site.xml
* (add) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/services/AMBARI_METRICS/configuration/ams-ssl-client.xml
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/services/AMBARI_METRICS/configuration/ams-hbase-site.xml
* (add) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/services/AMBARI_METRICS/configuration/ams-grafana-env.xml
* (add) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/services/AMBARI_METRICS/configuration/ams-ssl-server.xml
* (add) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/services/AMBARI_METRICS/configuration/ams-grafana-ini.xml


> AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade
> 
>
> Key: AMBARI-21542
> URL: https://issues.apache.org/jira/browse/AMBARI-21542
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Critical
> Fix For: 2.5.2
>
>
> After IOP 4.2 to HDP 2.6.2 upgrade, AMS fails to start due to missing Grafana 
> configuration.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
>  line 68, in <module>
> AmsMonitor().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
>  line 39, in start
> self.configure(env) # for security
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 119, in locking_configure
> original_configure(obj, *args, **kw)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
>  line 34, in configure
> import params
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py",
>  line 29, in <module>
> import status_params
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py",
>  line 27, in <module>
> from params_linux import *
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py",
>  line 62, in <module>
> grafana_pid_file = format("{ams_grafana_pid_dir}/grafana-server.pid")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py",
>  line 95, in format
> return ConfigurationFormatter().format(format_string, args, **result)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py",
>  line 59, in format
> result_protected = self.vformat(format_string, args, all_params)
>   File "/usr/lib64/python2.7/string.py", line 549, in vformat
> result = self._vformat(format_string, args, kwargs, used_args, 2)
>   File "/usr/lib64/python2.7/string.py", line 582, in _vformat
> result.append(self.format_field(obj, format_spec))
>   File "/usr/lib64/python2.7/string.py", line 599, in format_field
> return format(value, format_spec)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
>  line 73, in __getattr__
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'ams-grafana-env' was not found in configurations dictionary!
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-21542) AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade

2017-07-20 Thread Siddharth Wagle (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle resolved AMBARI-21542.
--
Resolution: Fixed

Pushed to branch-2.5

> AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade
> 
>
> Key: AMBARI-21542
> URL: https://issues.apache.org/jira/browse/AMBARI-21542
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Critical
> Fix For: 2.5.2
>
>
> After IOP 4.2 to HDP 2.6.2 upgrade, AMS fails to start due to missing Grafana 
> configuration.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
>  line 68, in <module>
> AmsMonitor().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
>  line 39, in start
> self.configure(env) # for security
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 119, in locking_configure
> original_configure(obj, *args, **kw)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
>  line 34, in configure
> import params
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py",
>  line 29, in <module>
> import status_params
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py",
>  line 27, in <module>
> from params_linux import *
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py",
>  line 62, in <module>
> grafana_pid_file = format("{ams_grafana_pid_dir}/grafana-server.pid")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py",
>  line 95, in format
> return ConfigurationFormatter().format(format_string, args, **result)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py",
>  line 59, in format
> result_protected = self.vformat(format_string, args, all_params)
>   File "/usr/lib64/python2.7/string.py", line 549, in vformat
> result = self._vformat(format_string, args, kwargs, used_args, 2)
>   File "/usr/lib64/python2.7/string.py", line 582, in _vformat
> result.append(self.format_field(obj, format_spec))
>   File "/usr/lib64/python2.7/string.py", line 599, in format_field
> return format(value, format_spec)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
>  line 73, in __getattr__
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'ams-grafana-env' was not found in configurations dictionary!
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095635#comment-16095635
 ] 

Hudson commented on AMBARI-21541:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1729 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1729/])
AMBARI-21541 Restart services failed post Ambari Upgrade (dili) (dili: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=ab1d378d2d2b244eec4fea6a87dca54b4d42073e])
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.0/hooks/before-ANY/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2.5/hooks/before-ANY/scripts/shared_initialization.py


> Restart services failed post Ambari Upgrade
> ---
>
> Key: AMBARI-21541
> URL: https://issues.apache.org/jira/browse/AMBARI-21541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Di Li
>Assignee: Di Li
> Fix For: 2.5.2
>
> Attachments: AMBARI-21541.patch
>
>
> Py API was updated in AMBARI-21531. Client component restart fails after 
> Ambari upgrade while running custom hook script on Suse 11. This causes the 
> before-ANY hook in BI 4.2 and 4.2.5 stack to fail to execute with error:
> resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
> integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li updated AMBARI-21541:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Restart services failed post Ambari Upgrade
> ---
>
> Key: AMBARI-21541
> URL: https://issues.apache.org/jira/browse/AMBARI-21541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Di Li
>Assignee: Di Li
> Fix For: 2.5.2
>
> Attachments: AMBARI-21541.patch
>
>
> Py API was updated in AMBARI-21531. Client component restart fails after 
> Ambari upgrade while running custom hook script on Suse 11. This causes the 
> before-ANY hook in BI 4.2 and 4.2.5 stack to fail to execute with error:
> resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
> integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Di Li (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095613#comment-16095613
 ] 

Di Li commented on AMBARI-21541:


pushed to branch-2.5 as 
https://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=ab1d378d2d2b244eec4fea6a87dca54b4d42073e

> Restart services failed post Ambari Upgrade
> ---
>
> Key: AMBARI-21541
> URL: https://issues.apache.org/jira/browse/AMBARI-21541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Di Li
>Assignee: Di Li
> Fix For: 2.5.2
>
> Attachments: AMBARI-21541.patch
>
>
> Py API was updated in AMBARI-21531. Client component restart fails after 
> Ambari upgrade while running custom hook script on Suse 11. This causes the 
> before-ANY hook in BI 4.2 and 4.2.5 stack to fail to execute with error:
> resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
> integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21542) AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade

2017-07-20 Thread Siddharth Wagle (JIRA)
Siddharth Wagle created AMBARI-21542:


 Summary: AMS fail to start after IOP 4.2 to HDP 2.6.2 upgrade
 Key: AMBARI-21542
 URL: https://issues.apache.org/jira/browse/AMBARI-21542
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.5.2
Reporter: Siddharth Wagle
Assignee: Siddharth Wagle
Priority: Critical
 Fix For: 2.5.2


After IOP 4.2 to HDP 2.6.2 upgrade, AMS fails to start due to missing Grafana 
configuration.

{code}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
 line 68, in <module>
AmsMonitor().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 329, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
 line 39, in start
self.configure(env) # for security
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 119, in locking_configure
original_configure(obj, *args, **kw)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py",
 line 34, in configure
import params
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/params.py",
 line 29, in <module>
import status_params
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py",
 line 27, in <module>
from params_linux import *
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py",
 line 62, in <module>
grafana_pid_file = format("{ams_grafana_pid_dir}/grafana-server.pid")
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py",
 line 95, in format
return ConfigurationFormatter().format(format_string, args, **result)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py",
 line 59, in format
result_protected = self.vformat(format_string, args, all_params)
  File "/usr/lib64/python2.7/string.py", line 549, in vformat
result = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/usr/lib64/python2.7/string.py", line 582, in _vformat
result.append(self.format_field(obj, format_spec))
  File "/usr/lib64/python2.7/string.py", line 599, in format_field
return format(value, format_spec)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
 line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in 
configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 
'ams-grafana-env' was not found in configurations dictionary!
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (AMBARI-21463) Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI

2017-07-20 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez reopened AMBARI-21463:
--

> Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, 
> stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI
> ---
>
> Key: AMBARI-21463
> URL: https://issues.apache.org/jira/browse/AMBARI-21463
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: 2.5.2
>
> Attachments: AMBARI-21463.addendum.patch, AMBARI-21463.patch
>
>
> Oozie Server restart failed due to this: Unable to copy 
> /usr/share/HDP-oozie/ext-2.2.zip because it does not exist
> Doesn't look like HDP rpms created this path:
> {code}
> [root@sid-test-2 ~]# ls -l /var/lib/oozie/ext-2.2.zip
> -rwxr-xr-x. 1 oozie hadoop 6800612 Jul  5 18:03 /var/lib/oozie/ext-2.2.zip
> [root@sid-test-2 ~]# ls -l /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> -rw-r--r--. 1 oozie hadoop 6800612 Jul  6 16:36 
> /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> {code}
> The extjs rpm seems to come from IOP-UTILS:
> {code}
> [root@sid-test-2 oozie]# yum list | grep extjs
> extjs.noarch  2.2_IBM_2-1
> @IOP-UTILS-1.3
> [root@sid-test-2 oozie]# rpm -qa | grep extjs
> extjs-2.2_IBM_2-1.noarch
> {code}
> We should swap the source from
> {noformat}
> /usr/share/HDP-oozie/ext-2.2.zip
> {noformat}
> to
> {noformat}
> /usr/share/BIGINSIGHTS-oozie/ext-2.2.zip
> {noformat}
> since the latter does exist.
> Also, restarting Oozie Clients during EU is failing because stack_tools.py is 
> missing the "get_stack_name" function in the __all__ variable.
> Lastly, disable showing the BigInsights stack by default in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21463) Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI

2017-07-20 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-21463:
-
Attachment: AMBARI-21463.patch

> Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, 
> stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI
> ---
>
> Key: AMBARI-21463
> URL: https://issues.apache.org/jira/browse/AMBARI-21463
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: 2.5.2
>
> Attachments: AMBARI-21463.addendum.patch, AMBARI-21463.patch
>
>
> Oozie Server restart failed due to this: Unable to copy 
> /usr/share/HDP-oozie/ext-2.2.zip because it does not exist
> Doesn't look like HDP rpms created this path:
> {code}
> [root@sid-test-2 ~]# ls -l /var/lib/oozie/ext-2.2.zip
> -rwxr-xr-x. 1 oozie hadoop 6800612 Jul  5 18:03 /var/lib/oozie/ext-2.2.zip
> [root@sid-test-2 ~]# ls -l /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> -rw-r--r--. 1 oozie hadoop 6800612 Jul  6 16:36 
> /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> {code}
> The extjs rpm seems to come from IOP-UTILS:
> {code}
> [root@sid-test-2 oozie]# yum list | grep extjs
> extjs.noarch  2.2_IBM_2-1
> @IOP-UTILS-1.3
> [root@sid-test-2 oozie]# rpm -qa | grep extjs
> extjs-2.2_IBM_2-1.noarch
> {code}
> We should swap the source from
> {noformat}
> /usr/share/HDP-oozie/ext-2.2.zip
> {noformat}
> to
> {noformat}
> /usr/share/BIGINSIGHTS-oozie/ext-2.2.zip
> {noformat}
> since the latter does exist.
> Also, restarting Oozie Clients during EU is failing because stack_tools.py is 
> missing the "get_stack_name" function in the __all__ variable.
> Lastly, disable showing the BigInsights stack by default in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21463) Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI

2017-07-20 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-21463:
-
Status: Patch Available  (was: Reopened)

> Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, 
> stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI
> ---
>
> Key: AMBARI-21463
> URL: https://issues.apache.org/jira/browse/AMBARI-21463
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: 2.5.2
>
> Attachments: AMBARI-21463.addendum.patch, AMBARI-21463.patch
>
>
> Oozie Server restart failed due to this: Unable to copy 
> /usr/share/HDP-oozie/ext-2.2.zip because it does not exist
> Doesn't look like HDP rpms created this path:
> {code}
> [root@sid-test-2 ~]# ls -l /var/lib/oozie/ext-2.2.zip
> -rwxr-xr-x. 1 oozie hadoop 6800612 Jul  5 18:03 /var/lib/oozie/ext-2.2.zip
> [root@sid-test-2 ~]# ls -l /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> -rw-r--r--. 1 oozie hadoop 6800612 Jul  6 16:36 
> /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> {code}
> The extjs rpm seems to come from IOP-UTILS:
> {code}
> [root@sid-test-2 oozie]# yum list | grep extjs
> extjs.noarch  2.2_IBM_2-1
> @IOP-UTILS-1.3
> [root@sid-test-2 oozie]# rpm -qa | grep extjs
> extjs-2.2_IBM_2-1.noarch
> {code}
> We should swap the source from
> {noformat}
> /usr/share/HDP-oozie/ext-2.2.zip
> {noformat}
> to
> {noformat}
> /usr/share/BIGINSIGHTS-oozie/ext-2.2.zip
> {noformat}
> since the latter does exist.
> Also, restarting Oozie Clients during EU is failing because stack_tools.py is 
> missing the "get_stack_name" function in the __all__ variable.
> Lastly, disable showing the BigInsights stack by default in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21463) Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI

2017-07-20 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-21463:
-
Attachment: AMBARI-21463.addendum.patch

> Cross-stack upgrade, Oozie restart fails with ext-2.2.zip missing error, 
> stack_tools.py is missing get_stack_name in __all__, disable BigInsights in UI
> ---
>
> Key: AMBARI-21463
> URL: https://issues.apache.org/jira/browse/AMBARI-21463
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: 2.5.2
>
> Attachments: AMBARI-21463.addendum.patch
>
>
> Oozie Server restart failed due to this: Unable to copy 
> /usr/share/HDP-oozie/ext-2.2.zip because it does not exist
> Doesn't look like HDP rpms created this path:
> {code}
> [root@sid-test-2 ~]# ls -l /var/lib/oozie/ext-2.2.zip
> -rwxr-xr-x. 1 oozie hadoop 6800612 Jul  5 18:03 /var/lib/oozie/ext-2.2.zip
> [root@sid-test-2 ~]# ls -l /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> -rw-r--r--. 1 oozie hadoop 6800612 Jul  6 16:36 
> /usr/hdp/2.6.1.0-129/oozie/libext/ext-2.2.zip
> {code}
> The extjs rpm seems to come from IOP-UTILS:
> {code}
> [root@sid-test-2 oozie]# yum list | grep extjs
> extjs.noarch  2.2_IBM_2-1
> @IOP-UTILS-1.3
> [root@sid-test-2 oozie]# rpm -qa | grep extjs
> extjs-2.2_IBM_2-1.noarch
> {code}
> We should swap the source from
> {noformat}
> /usr/share/HDP-oozie/ext-2.2.zip
> {noformat}
> to
> {noformat}
> /usr/share/BIGINSIGHTS-oozie/ext-2.2.zip
> {noformat}
> since the latter does exist.
> Also, restarting Oozie Clients during EU is failing because stack_tools.py is 
> missing the "get_stack_name" function in the __all__ variable.
> Lastly, disable showing the BigInsights stack by default in the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21528) Zookeeper server has incorrect memory setting, missing m in Xmx value

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095505#comment-16095505
 ] 

Hudson commented on AMBARI-21528:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7793 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7793/])
AMBARI-21528. Zookeeper server has incorrect memory setting, missing m 
(afernandez: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=2a298a3f707c4a3702d0f70e927946540661c916])
* (edit) 
ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py


> Zookeeper server has incorrect memory setting, missing m in Xmx value
> -
>
> Key: AMBARI-21528
> URL: https://issues.apache.org/jira/browse/AMBARI-21528
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21528.patch
>
>
> Repro Steps:
> * Installed BI 4.2.0 cluster on IBM Ambari 2.2.2 with Zookeeper
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.2.0 repo, installed packages
> * Ran service checks
> * Started Express Upgrade
> Result: _Service Check ZooKeeper_ step failed with {{KeeperErrorCode = 
> ConnectionLoss for /zk_smoketest}}
> This was caused by Zookeeper dying immediately during restart:
> {noformat}
> Error occurred during initialization of VM
> Too small initial heap
> {noformat}
> {noformat:title=zookeeper-env.sh before upgrade}
> export JAVA_HOME=/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64
> export ZOOKEEPER_HOME=/usr/iop/current/zookeeper-server
> export ZOO_LOG_DIR=/var/log/zookeeper
> export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
> export SERVER_JVMFLAGS=-Xmx1024m
> export JAVA=$JAVA_HOME/bin/java
> export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*
> {noformat}
> {noformat:title=zookeeper-env.sh after upgrade}
> export JAVA_HOME=/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64
> export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-client
> export ZOO_LOG_DIR=/var/log/zookeeper
> export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
> export SERVER_JVMFLAGS=-Xmx1024
> export JAVA=$JAVA_HOME/bin/java
> export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*
> {noformat}
> Note missing "m" in memory setting.
> zookeeper-env template contains,
> {noformat}
> export SERVER_JVMFLAGS={{zk_server_heapsize}}
> {noformat}
> In this cluster, zookeeper-env contains zk_server_heapsize: "1024", while the
> params_linux.py file is inconsistent about appending the letter "m".
> {noformat}
> zk_server_heapsize_value = 
> str(default('configurations/zookeeper-env/zk_server_heapsize', "1024m"))
> zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
> {noformat}
> Instead, it should be,
> {noformat}
> zk_server_heapsize_value = 
> str(default('configurations/zookeeper-env/zk_server_heapsize', "1024"))
> zk_server_heapsize_value = zk_server_heapsize_value.strip()
> if len(zk_server_heapsize_value) > 0 and zk_server_heapsize_value[-1].isdigit():
>   zk_server_heapsize_value = zk_server_heapsize_value + "m"
> zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21530) Service Checks During Upgrades Should Use Desired Stack

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095457#comment-16095457
 ] 

Hudson commented on AMBARI-21530:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #7792 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7792/])
AMBARI-21530 - Service Checks During Upgrades Should Use Desired Stack 
(jhurley: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=e87a3e31a9a18c5178f1170cef15c4de47f6808e])
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
* (edit) ambari-server/src/test/python/TestStackFeature.py
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ExecutionCommandWrapper.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java


> Service Checks During Upgrades Should Use Desired Stack
> ---
>
> Key: AMBARI-21530
> URL: https://issues.apache.org/jira/browse/AMBARI-21530
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21530.patch
>
>
> During an upgrade from BI 4.2 to HDP 2.6, some service checks were failing
> because their hooks/service folders were being overwritten by parts of the
> scheduler framework. At the time of orchestration, the cluster's desired
> stack ID was still BI, but the effective ID used for the upgrade was HDP
> (which was being clobbered).
> Exception on running YARN service check:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 91, in <module>
> ServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 54, in service_check
> user=params.smokeuser,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar' returned 
> 1. 17/07/19 19:34:40 INFO distributedshell.Client: Initializing Client
> 17/07/19 19:34:40 INFO distributedshell.Client: Running Client
> 17/07/19 19:34:40 INFO client.RMProxy: Connecting to ResourceManager at 
> sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:8050
> 17/07/19 19:34:40 INFO client.AHSProxy: Connecting to Application History 
> server at sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:10200
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=1
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster node info from ASM
> 17/07/19 19:34:40 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=sid-bigi-3.c.pramod-thangali.internal:45454, 
> nodeAddresssid-bigi-3.c.pramod-thangali.internal:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 17/07/19 19:34:40 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=ADMINISTER_QUEUE
> 

[jira] [Commented] (AMBARI-21528) Zookeeper server has incorrect memory setting, missing m in Xmx value

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095450#comment-16095450
 ] 

Hudson commented on AMBARI-21528:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1728 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1728/])
AMBARI-21528. Zookeeper server has incorrect memory setting, missing m 
(afernandez: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=d4244f5206feca1bb6001eea6d550494f69e8762])
* (edit) 
ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/params_linux.py


> Zookeeper server has incorrect memory setting, missing m in Xmx value
> -
>
> Key: AMBARI-21528
> URL: https://issues.apache.org/jira/browse/AMBARI-21528
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21528.patch
>
>
> Repro Steps:
> * Installed BI 4.2.0 cluster on IBM Ambari 2.2.2 with Zookeeper
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.2.0 repo, installed packages
> * Ran service checks
> * Started Express Upgrade
> Result: _Service Check ZooKeeper_ step failed with {{KeeperErrorCode = 
> ConnectionLoss for /zk_smoketest}}
> This was caused by Zookeeper dying immediately during restart:
> {noformat}
> Error occurred during initialization of VM
> Too small initial heap
> {noformat}
> {noformat:title=zookeeper-env.sh before upgrade}
> export JAVA_HOME=/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64
> export ZOOKEEPER_HOME=/usr/iop/current/zookeeper-server
> export ZOO_LOG_DIR=/var/log/zookeeper
> export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
> export SERVER_JVMFLAGS=-Xmx1024m
> export JAVA=$JAVA_HOME/bin/java
> export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*
> {noformat}
> {noformat:title=zookeeper-env.sh after upgrade}
> export JAVA_HOME=/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64
> export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-client
> export ZOO_LOG_DIR=/var/log/zookeeper
> export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
> export SERVER_JVMFLAGS=-Xmx1024
> export JAVA=$JAVA_HOME/bin/java
> export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*
> {noformat}
> Note missing "m" in memory setting.
> zookeeper-env template contains,
> {noformat}
> export SERVER_JVMFLAGS={{zk_server_heapsize}}
> {noformat}
> In this cluster, zookeeper-env contains zk_server_heapsize: "1024", while the
> params_linux.py file is inconsistent about appending the letter "m".
> {noformat}
> zk_server_heapsize_value = 
> str(default('configurations/zookeeper-env/zk_server_heapsize', "1024m"))
> zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
> {noformat}
> Instead, it should be,
> {noformat}
> zk_server_heapsize_value = 
> str(default('configurations/zookeeper-env/zk_server_heapsize', "1024"))
> zk_server_heapsize_value = zk_server_heapsize_value.strip()
> if len(zk_server_heapsize_value) > 0 and zk_server_heapsize_value[-1].isdigit():
>   zk_server_heapsize_value = zk_server_heapsize_value + "m"
> zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21345) Add host doesn't fully add a node when include/exclude files are used

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095443#comment-16095443
 ] 

Hadoop QA commented on AMBARI-21345:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12878187/AMBARI-21345_addiotional.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
ambari-server:

  
org.apache.ambari.server.controller.AmbariManagementControllerTest

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11831//console

This message is automatically generated.

> Add host doesn't fully add a node when include/exclude files are used
> -
>
> Key: AMBARI-21345
> URL: https://issues.apache.org/jira/browse/AMBARI-21345
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Paul Codding
>Assignee: Dmytro Sen
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21345_5.patch, AMBARI-21345_addiotional.patch
>
>
> When using dfs.include/dfs.exclude files for HDFS and 
> yarn.include/yarn.exclude for YARN, we need to ensure these files are updated 
> whenever a host is added or removed, and we should also make sure su -l hdfs 
> -c "hdfs dfsadmin -refreshNodes" for HDFS and su -l yarn -c "yarn rmadmin 
> -refreshNodes" for YARN is run after the host has been added and the 
> corresponding HDFS/YARN files are updated.
> Options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21528) Zookeeper server has incorrect memory setting, missing m in Xmx value

2017-07-20 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-21528:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-2.5, commit d4244f5206feca1bb6001eea6d550494f69e8762
trunk, commit 2a298a3f707c4a3702d0f70e927946540661c916

> Zookeeper server has incorrect memory setting, missing m in Xmx value
> -
>
> Key: AMBARI-21528
> URL: https://issues.apache.org/jira/browse/AMBARI-21528
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.2
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: AMBARI-21348
> Fix For: trunk, 2.5.2
>
> Attachments: AMBARI-21528.patch
>
>
> Repro Steps:
> * Installed BI 4.2.0 cluster on IBM Ambari 2.2.2 with Zookeeper
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.2.0 repo, installed packages
> * Ran service checks
> * Started Express Upgrade
> Result: _Service Check ZooKeeper_ step failed with {{KeeperErrorCode = 
> ConnectionLoss for /zk_smoketest}}
> This was caused by Zookeeper dying immediately during restart:
> {noformat}
> Error occurred during initialization of VM
> Too small initial heap
> {noformat}
> {noformat:title=zookeeper-env.sh before upgrade}
> export JAVA_HOME=/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64
> export ZOOKEEPER_HOME=/usr/iop/current/zookeeper-server
> export ZOO_LOG_DIR=/var/log/zookeeper
> export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
> export SERVER_JVMFLAGS=-Xmx1024m
> export JAVA=$JAVA_HOME/bin/java
> export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*
> {noformat}
> {noformat:title=zookeeper-env.sh after upgrade}
> export JAVA_HOME=/usr/jdk64/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64
> export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-client
> export ZOO_LOG_DIR=/var/log/zookeeper
> export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
> export SERVER_JVMFLAGS=-Xmx1024
> export JAVA=$JAVA_HOME/bin/java
> export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*
> {noformat}
> Note missing "m" in memory setting.
> zookeeper-env template contains,
> {noformat}
> export SERVER_JVMFLAGS={{zk_server_heapsize}}
> {noformat}
> In this cluster, zookeeper-env contains zk_server_heapsize: "1024", while the
> params_linux.py file is inconsistent about appending the letter "m".
> {noformat}
> zk_server_heapsize_value = 
> str(default('configurations/zookeeper-env/zk_server_heapsize', "1024m"))
> zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
> {noformat}
> Instead, it should be,
> {noformat}
> zk_server_heapsize_value = 
> str(default('configurations/zookeeper-env/zk_server_heapsize', "1024"))
> zk_server_heapsize_value = zk_server_heapsize_value.strip()
> if len(zk_server_heapsize_value) > 0 and zk_server_heapsize_value[-1].isdigit():
>   zk_server_heapsize_value = zk_server_heapsize_value + "m"
> zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095390#comment-16095390
 ] 

Hadoop QA commented on AMBARI-21541:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12878244/AMBARI-21541.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11830//console

This message is automatically generated.

> Restart services failed post Ambari Upgrade
> ---
>
> Key: AMBARI-21541
> URL: https://issues.apache.org/jira/browse/AMBARI-21541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Di Li
>Assignee: Di Li
> Fix For: 2.5.2
>
> Attachments: AMBARI-21541.patch
>
>
> Py API was updated in AMBARI-21531. Client component restart fails after 
> Ambari upgrade while running custom hook script on Suse 11. This causes the 
> before-ANY hook in BI 4.2 and 4.2.5 stack to fail to execute with error:
> resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
> integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21539) Resource Manager fails to restart properly during an IOP to HDP upgrade

2017-07-20 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty updated AMBARI-21539:
---
Priority: Critical  (was: Major)

> Resource Manager fails to restart properly during an IOP to HDP upgrade
> ---
>
> Key: AMBARI-21539
> URL: https://issues.apache.org/jira/browse/AMBARI-21539
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Tim Thorpe
>Assignee: Attila Magyar
>Priority: Critical
> Fix For: 2.5.2
>
>
> 2017-07-20 06:19:28,250 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore failed in 
> state STARTED; cause: org.apache.zookeeper.KeeperException$NoAuthException: 
> KeeperErrorCode = NoAuth for /rmstore/ZKRMStateRoot
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /rmstore/ZKRMStateRoot
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$3.run(ZKRMStateStore.java:372)
> Workaround:
> /usr/hdp/current/zookeeper-server/bin/zkCli.sh -server 127.0.0.1:2181
> rmr /rmstore
> This workaround fails in a kerberized environment with this error:
> Authentication is not valid : /rmstore/ZKRMStateRoot



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21539) Resource Manager fails to restart properly during an IOP to HDP upgrade

2017-07-20 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty updated AMBARI-21539:
---
Fix Version/s: 2.5.2

> Resource Manager fails to restart properly during an IOP to HDP upgrade
> ---
>
> Key: AMBARI-21539
> URL: https://issues.apache.org/jira/browse/AMBARI-21539
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Tim Thorpe
>Assignee: Attila Magyar
> Fix For: 2.5.2
>
>
> 2017-07-20 06:19:28,250 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore failed in 
> state STARTED; cause: org.apache.zookeeper.KeeperException$NoAuthException: 
> KeeperErrorCode = NoAuth for /rmstore/ZKRMStateRoot
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /rmstore/ZKRMStateRoot
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$3.run(ZKRMStateStore.java:372)
> Workaround:
> /usr/hdp/current/zookeeper-server/bin/zkCli.sh -server 127.0.0.1:2181
> rmr /rmstore
> This workaround fails in a kerberized environment with this error:
> Authentication is not valid : /rmstore/ZKRMStateRoot



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (AMBARI-21539) Resource Manager fails to restart properly during an IOP to HDP upgrade

2017-07-20 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty reassigned AMBARI-21539:
--

Assignee: Attila Magyar

> Resource Manager fails to restart properly during an IOP to HDP upgrade
> ---
>
> Key: AMBARI-21539
> URL: https://issues.apache.org/jira/browse/AMBARI-21539
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Tim Thorpe
>Assignee: Attila Magyar
> Fix For: 2.5.2
>
>
> 2017-07-20 06:19:28,250 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore failed in 
> state STARTED; cause: org.apache.zookeeper.KeeperException$NoAuthException: 
> KeeperErrorCode = NoAuth for /rmstore/ZKRMStateRoot
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /rmstore/ZKRMStateRoot
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$3.run(ZKRMStateStore.java:372)
> Workaround:
> /usr/hdp/current/zookeeper-server/bin/zkCli.sh -server 127.0.0.1:2181
> rmr /rmstore
> This workaround fails in a kerberized environment with this error:
> Authentication is not valid : /rmstore/ZKRMStateRoot



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li updated AMBARI-21541:
---
Status: Patch Available  (was: Open)

> Restart services failed post Ambari Upgrade
> ---
>
> Key: AMBARI-21541
> URL: https://issues.apache.org/jira/browse/AMBARI-21541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Di Li
>Assignee: Di Li
> Fix For: 2.5.2
>
> Attachments: AMBARI-21541.patch
>
>
> Py API was updated in AMBARI-21531. Client component restart fails after 
> Ambari upgrade while running custom hook script on Suse 11. This causes the 
> before-ANY hook in BI 4.2 and 4.2.5 stack to fail to execute with error:
> resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
> integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li updated AMBARI-21541:
---
Attachment: AMBARI-21541.patch

> Restart services failed post Ambari Upgrade
> ---
>
> Key: AMBARI-21541
> URL: https://issues.apache.org/jira/browse/AMBARI-21541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Di Li
>Assignee: Di Li
> Fix For: 2.5.2
>
> Attachments: AMBARI-21541.patch
>
>
> Py API was updated in AMBARI-21531. Client component restart fails after 
> Ambari upgrade while running custom hook script on Suse 11. This causes the 
> before-ANY hook in BI 4.2 and 4.2.5 stack to fail to execute with error:
> resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
> integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21541) Restart services failed post Ambari Upgrade

2017-07-20 Thread Di Li (JIRA)
Di Li created AMBARI-21541:
--

 Summary: Restart services failed post Ambari Upgrade
 Key: AMBARI-21541
 URL: https://issues.apache.org/jira/browse/AMBARI-21541
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.5.2
Reporter: Di Li
Assignee: Di Li
 Fix For: 2.5.2


Py API was updated in AMBARI-21531. Client component restart fails after Ambari 
upgrade while running custom hook script on Suse 11. This causes the before-ANY 
hook in BI 4.2 and 4.2.5 stack to fail to execute with error:

resource_management.core.exceptions.InvalidArgument: User['hive'] Expected an 
integer for uid received '1001'



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li resolved AMBARI-21527.

Resolution: Fixed

> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Di Li
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527-HA_and_NonHA.patch, AMBARI-21527.patch
>
>
> P.S
>   This happens on NN restart (kerberos cluster) and remote DS restart (non 
> secured cluster) as well.
> Steps:
> * Installed BI 4.2 cluster on Ambari 2.2 with Slider and services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in <module>
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   "RemoteException": {
> "exception": "ConnectException", 
> "javaClassName": "java.net.ConnectException", 
> "message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
> c7301.ambari.apache.org:8020 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused"
>   }
> }
> {noformat}
> {noformat:title=NameNode log, pre-upgrade restart}
> 2017-07-18 07:48:05,435 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(397)) - fs.defaultFS is 
> hdfs://c7301.ambari.apache.org:8020
> 2017-07-18 07:48:05,436 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(417)) - Clients are to use 
> 

[jira] [Commented] (AMBARI-21539) Resource Manager fails to restart properly during an IOP to HDP upgrade

2017-07-20 Thread Tim Thorpe (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095307#comment-16095307
 ] 

Tim Thorpe commented on AMBARI-21539:
-

I have tried changing yarn.resourcemanager.zk-state-store.parent-path to a
new value and starting the ResourceManager; it still fails with the same
error.

> Resource Manager fails to restart properly during an IOP to HDP upgrade
> ---
>
> Key: AMBARI-21539
> URL: https://issues.apache.org/jira/browse/AMBARI-21539
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Tim Thorpe
>
> 2017-07-20 06:19:28,250 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore failed in 
> state STARTED; cause: org.apache.zookeeper.KeeperException$NoAuthException: 
> KeeperErrorCode = NoAuth for /rmstore/ZKRMStateRoot
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
> NoAuth for /rmstore/ZKRMStateRoot
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$3.run(ZKRMStateStore.java:372)
> Workaround:
> /usr/hdp/current/zookeeper-server/bin/zkCli.sh -server 127.0.0.1:2181
> rmr /rmstore
> This workaround fails in a kerberized environment with this error:
> Authentication is not valid : /rmstore/ZKRMStateRoot



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21538) Zeppelin quick links and service check failing in SSL enabled environment.

2017-07-20 Thread amarnath reddy pappu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

amarnath reddy pappu updated AMBARI-21538:
--
Description: 
From the Ambari documentation
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html),
we confirm HDP-2.6.1 compatibility with Ambari 2.5.0. However, when HDP 2.6.1
is deployed using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin
service check and quick link fail. This may be due to the introduction of two
distinct properties for secure (SSL enabled) and non-secure links, as
described in https://issues.apache.org/jira/browse/ZEPPELIN-1321. The two
properties are:

"zeppelin.server.port". This property is by default set to 9995 which is used 
to access the server via a webUI

"zeppelin.server.ssl.port". This property was introduced to be used when SSL 
was enabled. Hence, UI would have to accessed for the port that was defined for 
this property. Default is 8443.


With default settings, we see that.
Zeppelin Quick links would point to 9995 port. But, the service would be 
listening on 8443 port.

Zeppelin service check fails with below stack trace,


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
/dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
200' returned 1.
{code}


In summary: Ambari always uses the "zeppelin.server.port" port for quick 
links, whereas the Zeppelin server is started on the "zeppelin.server.ssl.port" 
port. This needs to be fixed.

A workaround is to set both properties, "zeppelin.server.port" and 
"zeppelin.server.ssl.port", to 8443; a sketch of the intended selection logic 
follows.

  was:
From Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
we confirm HDP-2.6.1 compatibility with Ambari 2.5.0. However, when HDP 2.6.1 
is deployed using Ambari 2.5.0.3 and upon enabling SSL for Zeppelin, we 
observe that Zeppelin service check and quick link would fail. This may be due 
to introduction of two distinct properties for a secure (SSL enabled) and non 
secure links as described here, 
https://issues.apache.org/jira/browse/ZEPPELIN-1321 . The two properties are,

"zeppelin.server.port". This property is by default set to 9995 which is used 
to access the server via a webUI

"zeppelin.server.ssl.port". This property was introduced to be used when SSL 
was enabled. Hence, UI would have to accessed for the port that was defined for 
this property. Default is 8443.

With default settings, we see that.
Zeppelin Quick links would point to 9995 port. But, the service would be 
listening on 8443 port.

Zeppelin service check fails with below stack trace,


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in 

[jira] [Updated] (AMBARI-21538) Zeppelin quick links and service check failing in SSL enabled environment.

2017-07-20 Thread amarnath reddy pappu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

amarnath reddy pappu updated AMBARI-21538:
--
Component/s: (was: ambari-admin)
 ambari-server

> Zeppelin quick links and service check failing in SSL enabled environment.
> --
>
> Key: AMBARI-21538
> URL: https://issues.apache.org/jira/browse/AMBARI-21538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Krishnama Raju K
>  Labels: easyfix
>
> From the Ambari documentation 
> (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html),
> HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is 
> deployed using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin 
> service check and quick link fail. This may be due to the introduction of two 
> distinct properties for secure (SSL-enabled) and non-secure links, as 
> described here: https://issues.apache.org/jira/browse/ZEPPELIN-1321. The two 
> properties are:
> "zeppelin.server.port": set to 9995 by default; the port used to access the 
> server web UI when SSL is disabled.
> "zeppelin.server.ssl.port": introduced for SSL-enabled deployments; the UI 
> has to be accessed on the port defined by this property. Default is 8443.
> With default settings, the Zeppelin quick links point to port 9995, but the 
> service is listening on port 8443.
> Zeppelin service check fails with the stack trace below:
> {code:java}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 40, in <module>
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 314, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 37, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 262, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
> /dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
> 200' returned 1.
> {code}
> A workaround is to set both properties, "zeppelin.server.port" and 
> "zeppelin.server.ssl.port", to 8443.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21538) Zeppelin quick links and service check failing in SSL enabled environment.

2017-07-20 Thread Krishnama Raju K (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishnama Raju K updated AMBARI-21538:
--
Description: 
From the Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is deployed 
using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin service 
check and quick link fail. This may be due to the introduction of two distinct 
properties for secure (SSL-enabled) and non-secure links, as described here: 
https://issues.apache.org/jira/browse/ZEPPELIN-1321. The two properties are:

"zeppelin.server.port". This property is by default set to 9995 which is used 
to access the server via a webUI

"zeppelin.server.ssl.port". This property was introduced to be used when SSL 
was enabled. Hence, UI would have to accessed for the port that was defined for 
this property. Default is 8443.

With default settings, we see that.
Zeppelin Quick links would point to 9995 port. But, the service would be 
listening on 8443 port.

Zeppelin service check fails with below stack trace,


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
/dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
200' returned 1.
{code}



A workaround is to set both properties, "zeppelin.server.port" and 
"zeppelin.server.ssl.port", to 8443.

  was:
From Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
we confirm HDP-2.6.1 compatibility with Ambari 2.5.0. However, when HDP 2.6.1 
is deployed using Ambari 2.5.0.3 and upon enabling SSL for Zeppelin, we 
observe that Zeppelin service check and quick link would fail. This may be due 
to introduction of two distinct properties for a secure (SSL enabled) and non 
secure links as described here, 
https://issues.apache.org/jira/browse/ZEPPELIN-1321 . The two properties are,

Zeppelin Quick links would point to 9995 port. But, the service would be 
listening on 8443 port.

Zeppelin service check fails with below stack trace,


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, 

[jira] [Commented] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095202#comment-16095202
 ] 

Hudson commented on AMBARI-21527:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1727 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1727/])
AMBARI-21527 Restart of MR2 History Server failed due to wrong NameNode (dili: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=212ee1cb04483a2cd7fafb4304a1c5879f2895dc])
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/config-upgrade.xml
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2.5/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
* (edit) 
ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml


> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Di Li
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527-HA_and_NonHA.patch, AMBARI-21527.patch
>
>
> P.S
>   This happens on NN restart (kerberos cluster) and remote DS restart (non 
> secured cluster) as well.
> Steps:
> * Installed BI 4.2 cluster on Ambari 2.2 with Slider and services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in <module>
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   

[jira] [Created] (AMBARI-21540) Kerberos related enhancement request

2017-07-20 Thread amarnath reddy pappu (JIRA)
amarnath reddy pappu created AMBARI-21540:
-

 Summary: Kerberos related enhancement request
 Key: AMBARI-21540
 URL: https://issues.apache.org/jira/browse/AMBARI-21540
 Project: Ambari
  Issue Type: Improvement
  Components: ambari-server
Affects Versions: 2.4.2
Reporter: amarnath reddy pappu
Priority: Minor


1. In the UI, when the user clicks "Generate only missing keytabs", Ambari 
shows a message as if it were generating keytabs for all the principals - this 
is misleading and has to be corrected.
2. When a service is deleted from Ambari, its principals are not actually 
removed from the Ambari DB, nor from AD.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21539) Resource Manager fails to restart properly during an IOP to HDP upgrade

2017-07-20 Thread Tim Thorpe (JIRA)
Tim Thorpe created AMBARI-21539:
---

 Summary: Resource Manager fails to restart properly during an IOP 
to HDP upgrade
 Key: AMBARI-21539
 URL: https://issues.apache.org/jira/browse/AMBARI-21539
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.5.2
Reporter: Tim Thorpe


2017-07-20 06:19:28,250 INFO  service.AbstractService 
(AbstractService.java:noteFailure(272)) - Service 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore failed in 
state STARTED; cause: org.apache.zookeeper.KeeperException$NoAuthException: 
KeeperErrorCode = NoAuth for /rmstore/ZKRMStateRoot
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth 
for /rmstore/ZKRMStateRoot
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399)
  at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$3.run(ZKRMStateStore.java:372)

Workaround:
/usr/hdp/current/zookeeper-server/bin/zkCli.sh -server 127.0.0.1:2181
rmr /rmstore

This workaround fails in a kerberized environment with this error:
Authentication is not valid : /rmstore/ZKRMStateRoot
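
For reference, a hypothetical Python equivalent of the zkCli.sh workaround 
above (assuming the kazoo client; this is not part of Ambari). It is subject 
to the same limitation: on a kerberized cluster an unauthenticated session 
hits the NoAuth / "Authentication is not valid" error, since the znodes carry 
restrictive ACLs.

{code:python}
# Recursively delete the RM state store, like "rmr /rmstore" in zkCli.sh.
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()
try:
    if zk.exists('/rmstore'):
        # Raises NoAuthError on a kerberized cluster unless the session is
        # authenticated as an identity allowed by the znode ACLs.
        zk.delete('/rmstore', recursive=True)
finally:
    zk.stop()
{code}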



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Di Li (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16095154#comment-16095154
 ] 

Di Li commented on AMBARI-21527:


pushed to branch-2.5 as 
https://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=212ee1cb04483a2cd7fafb4304a1c5879f2895dc

> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Di Li
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527-HA_and_NonHA.patch, AMBARI-21527.patch
>
>
> P.S
>   This happens on NN restart (kerberos cluster) and remote DS restart (non 
> secured cluster) as well.
> Steps:
> * Installed BI 4.2 cluster on Ambari 2.2 with Slider and services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in <module>
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   "RemoteException": {
> "exception": "ConnectException", 
> "javaClassName": "java.net.ConnectException", 
> "message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
> c7301.ambari.apache.org:8020 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused"
>   }
> }
> {noformat}
> {noformat:title=NameNode log, pre-upgrade restart}
> 2017-07-18 07:48:05,435 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(397)) - fs.defaultFS is 
> 

[jira] [Updated] (AMBARI-21538) Zeppelin quick links and service check failing in SSL enabled environment.

2017-07-20 Thread Krishnama Raju K (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishnama Raju K updated AMBARI-21538:
--
Description: 
From the Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is deployed 
using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin service 
check and quick link fail. This may be due to the introduction of two distinct 
properties for secure (SSL-enabled) and non-secure links, as described here: 
https://issues.apache.org/jira/browse/ZEPPELIN-1321

The Zeppelin quick links point to port 9995, but the service is listening on 
port 8443.

Zeppelin service check fails with the stack trace below:


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
/dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
200' returned 1.
{code}


  was:
From Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
we confirm HDP-2.6.1 compatibility with Ambari 2.5.0. However, when HDP 2.6.1 
is deployed using Ambari 2.5.0.3 and upon enabling SSL for Zeppelin, we 
observe that Zeppelin service check would fail. This may be due to 
introduction of two distinct properties for a secure (SSL enabled) and non 
secure links as described here, 
https://issues.apache.org/jira/browse/ZEPPELIN-1321

Zeppelin service check fails with below stack trace,


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call

[jira] [Updated] (AMBARI-21538) Zeppelin quick links and service check failing in SSL enabled environment.

2017-07-20 Thread Krishnama Raju K (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishnama Raju K updated AMBARI-21538:
--
Summary: Zeppelin quick links and service check failing in SSL enabled 
environment.  (was: Zeppelin service check failing in SSL enabled environment.)

> Zeppelin quick links and service check failing in SSL enabled environment.
> --
>
> Key: AMBARI-21538
> URL: https://issues.apache.org/jira/browse/AMBARI-21538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Reporter: Krishnama Raju K
>  Labels: easyfix
>
> From the Ambari documentation 
> (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html),
> HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is 
> deployed using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin 
> service check fails. This may be due to the introduction of two distinct 
> properties for secure (SSL-enabled) and non-secure links, as described here: 
> https://issues.apache.org/jira/browse/ZEPPELIN-1321
> Zeppelin service check fails with the stack trace below:
> {code:java}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 40, in <module>
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 314, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 37, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 262, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
> /dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
> 200' returned 1.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21538) Zeppelin service check failing in SSL enabled environment.

2017-07-20 Thread Krishnama Raju K (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishnama Raju K updated AMBARI-21538:
--
Description: 
From the Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is deployed 
using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin service 
check fails. This may be due to the introduction of two distinct properties 
for secure (SSL-enabled) and non-secure links, as described here: 
https://issues.apache.org/jira/browse/ZEPPELIN-1321

Zeppelin service check fails with the stack trace below:


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
/dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
200' returned 1.
{code}


  was:
From Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
we confirm HDP-2.6.1 compatibility with Ambari 2.5.0. However, when HDP 2.6.1 
is deployed using Ambari 2.5.0.3 and upon enabling SSL for Zeppelin, we 
observe that Zeppelin service check and quick links would fail. This may be 
due to introduction of two distinct properties for a secure (SSL enabled) and 
non secure links as described here, 
https://issues.apache.org/jira/browse/ZEPPELIN-1321


Zeppelin Quick links would point to 9995 port. But, the service would be 
listening on 8443 port.

Zeppelin service check fails with below stack trace,


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call
  

[jira] [Updated] (AMBARI-21538) Zeppelin service check failing in SSL enabled environment.

2017-07-20 Thread Krishnama Raju K (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishnama Raju K updated AMBARI-21538:
--
Summary: Zeppelin service check failing in SSL enabled environment.  (was: 
Zeppelin quicklink and service check failing in SSL enabled environment.)

> Zeppelin service check failing in SSL enabled environment.
> --
>
> Key: AMBARI-21538
> URL: https://issues.apache.org/jira/browse/AMBARI-21538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Reporter: Krishnama Raju K
>  Labels: easyfix
>
> From the Ambari documentation 
> (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html),
> HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is 
> deployed using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin 
> service check and quick links fail. This may be due to the introduction of 
> two distinct properties for secure (SSL-enabled) and non-secure links, as 
> described here: https://issues.apache.org/jira/browse/ZEPPELIN-1321
> The Zeppelin quick links point to port 9995, but the service is listening on 
> port 8443.
> Zeppelin service check fails with the stack trace below:
> {code:java}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 40, in <module>
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 314, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 37, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 262, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
> /dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
> 200' returned 1.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21538) Zeppelin quicklink and service check failing in SSL enabled environment.

2017-07-20 Thread Krishnama Raju K (JIRA)
Krishnama Raju K created AMBARI-21538:
-

 Summary: Zeppelin quicklink and service check failing in SSL 
enabled environment.
 Key: AMBARI-21538
 URL: https://issues.apache.org/jira/browse/AMBARI-21538
 Project: Ambari
  Issue Type: Bug
  Components: ambari-admin
Reporter: Krishnama Raju K


From the Ambari documentation 
(https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-ambari.html), 
HDP 2.6.1 is compatible with Ambari 2.5.0. However, when HDP 2.6.1 is deployed 
using Ambari 2.5.0.3 and SSL is enabled for Zeppelin, the Zeppelin service 
check and quick links fail. This may be due to the introduction of two 
distinct properties for secure (SSL-enabled) and non-secure links, as 
described here: https://issues.apache.org/jira/browse/ZEPPELIN-1321


The Zeppelin quick links point to port 9995, but the service is listening on 
port 8443.

Zeppelin service check fails with the stack trace below:


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 40, in <module>
ZeppelinServiceCheck().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 314, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
 line 37, in service_check
logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 72, in inner
result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o 
/dev/null -w'%{http_code}' --negotiate -u: -k https://:9995 | grep 
200' returned 1.
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li updated AMBARI-21527:
---
Description: 
P.S
  This happens on NN restart (kerberos cluster) and remote DS restart (non 
secured cluster) as well.

Steps:

* Installed BI 4.2 cluster on Ambari 2.2 with Slider and services it required
* Upgraded Ambari to 2.5.2.0-146
* Registered HDP 2.6.1.0 repo, installed packages
* Restarted services that needed restart
* Ran service checks
* Started upgrade

Result: _Restarting History Server_ step failed with 

{noformat:title=errors-87.txt}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
 line 134, in <module>
HistoryServer().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 329, in execute
method(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 841, in restart
self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
  File 
"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
 line 85, in pre_upgrade_restart
copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
skip=params.sysprep_skip_copy_tarballs_hdfs)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
 line 267, in copy_to_hdfs
replace_existing_files=replace_existing_files,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 560, in action_create_on_execute
self.action_delayed("create")
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 557, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 292, in action_delayed
self._create_resource()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 308, in _create_resource
self._create_file(self.main_resource.resource.target, 
source=self.main_resource.resource.source, mode=self.mode)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 423, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
assertable_result=False, file_to_put=source, **kwargs)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 204, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
'%{http_code}' -X PUT --data-binary 
@/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
application/octet-stream' 
'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
 returned status_code=403. 
{
  "RemoteException": {
"exception": "ConnectException", 
"javaClassName": "java.net.ConnectException", 
"message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
c7301.ambari.apache.org:8020 failed on connection exception: 
java.net.ConnectException: Connection refused; For more details see:  
http://wiki.apache.org/hadoop/ConnectionRefused"
  }
}
{noformat}

{noformat:title=NameNode log, pre-upgrade restart}
2017-07-18 07:48:05,435 INFO  namenode.NameNode 
(NameNode.java:setClientNamenodeAddress(397)) - fs.defaultFS is 
hdfs://c7301.ambari.apache.org:8020
2017-07-18 07:48:05,436 INFO  namenode.NameNode 
(NameNode.java:setClientNamenodeAddress(417)) - Clients are to use 
c7301.ambari.apache.org:8020 to access this namenode/service.
2017-07-18 07:48:07,343 INFO  namenode.NameNode 
(NameNodeRpcServer.java:(342)) - RPC server is binding to 
c7301.ambari.apache.org:8020
2017-07-18 07:48:07,434 INFO  namenode.NameNode 
(NameNode.java:startCommonServices(695)) - NameNode RPC up at: 
c7301.ambari.apache.org/192.168.73.101:8020
{noformat}

{noformat:title=NameNode log, in-upgrade restart}
2017-07-18 09:03:42,336 INFO  namenode.NameNode 
(NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is 
hdfs://c7301.ambari.apache.org:8020
2017-07-18 09:03:42,337 INFO  namenode.NameNode 
(NameNode.java:setClientNamenodeAddress(470)) - Clients are to use 
c7301.ambari.apache.org:8020 to access this namenode/service.
2017-07-18 09:03:44,686 INFO  namenode.NameNode 

[jira] [Commented] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Di Li (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094889#comment-16094889
 ] 

Di Li commented on AMBARI-21527:


Hello Attila, 

Sorry that I took this JIRA for the time being. I looked at your patch and 
thought it could be improved to handle both the kerberos and non-secured cases 
that Tim and I hit. I also noticed that your patch does not update the upgrade 
pack xml to reference the new config id.

I posted my patch and am creating a JIRA review, adding you as the reviewer.

> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Di Li
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527-HA_and_NonHA.patch, AMBARI-21527.patch
>
>
> Steps:
> * Installed BI 4.2 cluster on Ambari 2.2 with Slider and services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in <module>
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   "RemoteException": {
> "exception": "ConnectException", 
> "javaClassName": "java.net.ConnectException", 
> "message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
> c7301.ambari.apache.org:8020 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused"
>   }
> }
> {noformat}
> {noformat:title=NameNode log, pre-upgrade restart}
> 2017-07-18 

[jira] [Assigned] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li reassigned AMBARI-21527:
--

Assignee: Di Li  (was: Doroszlai, Attila)

> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Di Li
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527-HA_and_NonHA.patch, AMBARI-21527.patch
>
>
> Steps:
> * Installed BI 4.2 cluster on Ambari 2.2 with Slider and services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in <module>
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   "RemoteException": {
> "exception": "ConnectException", 
> "javaClassName": "java.net.ConnectException", 
> "message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
> c7301.ambari.apache.org:8020 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused"
>   }
> }
> {noformat}
> {noformat:title=NameNode log, pre-upgrade restart}
> 2017-07-18 07:48:05,435 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(397)) - fs.defaultFS is 
> hdfs://c7301.ambari.apache.org:8020
> 2017-07-18 07:48:05,436 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(417)) - Clients are to use 
> c7301.ambari.apache.org:8020 to access this namenode/service.
> 2017-07-18 07:48:07,343 INFO  

[jira] [Updated] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li updated AMBARI-21527:
---
Attachment: AMBARI-21527-HA_and_NonHA.patch

This is a problem for both secured and non-secured clusters, because the HDFS 
Python script looks for that property *first* and uses it if it exists. That 
logic, combined with the fact that the property was (seemingly unnecessarily) 
merged in during EU with "localhost" as the value, is what caused the issues 
for Tim and me.

Both Tim and I hit it: he on a Kerberos cluster during a NameNode restart, I on 
a multi-node non-secured cluster during a remote DataNode restart.
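
For clarity, a minimal sketch of the lookup order described above, assuming the 
property in question is dfs.namenode.rpc-address (per the issue title); the 
function and dictionary names are illustrative, not Ambari's actual params code:

{code}
# Hypothetical sketch of the preference logic: the RPC-address property,
# if present, wins over fs.defaultFS -- so a stale "localhost" value
# merged in during EU silently overrides the correct NameNode host.
def namenode_rpc_address(hdfs_site, core_site):
    # Checked *first*; a leftover "localhost:8020" here is used as-is.
    rpc_address = hdfs_site.get('dfs.namenode.rpc-address')
    if rpc_address:
        return rpc_address
    # Fallback: derive host:port from fs.defaultFS.
    return core_site.get('fs.defaultFS', '').replace('hdfs://', '')
{code}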

> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Doroszlai, Attila
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527-HA_and_NonHA.patch, AMBARI-21527.patch
>
>
> Steps:
> * Installed a BI 4.2 cluster on Ambari 2.2 with Slider and the services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in 
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   "RemoteException": {
> "exception": "ConnectException", 
> "javaClassName": "java.net.ConnectException", 
> "message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
> c7301.ambari.apache.org:8020 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused"
>   }
> }
> 

[jira] [Commented] (AMBARI-21345) Add host doesn't fully add a node when include/exclude files are used

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094822#comment-16094822
 ] 

Hadoop QA commented on AMBARI-21345:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12878187/AMBARI-21345_addiotional.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
ambari-server:

  
org.apache.ambari.server.controller.AmbariManagementControllerTest

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11829//console

This message is automatically generated.

> Add host doesn't fully add a node when include/exclude files are used
> -
>
> Key: AMBARI-21345
> URL: https://issues.apache.org/jira/browse/AMBARI-21345
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Paul Codding
>Assignee: Dmytro Sen
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21345_5.patch, AMBARI-21345_addiotional.patch
>
>
> When using dfs.include/dfs.exclude files for HDFS and 
> yarn.include/yarn.exclude files for YARN, we need to ensure these files are 
> updated whenever a host is added or removed, and we should also make sure 
> su -l hdfs -c "hdfs dfsadmin -refreshNodes" for HDFS and su -l yarn -c "yarn 
> rmadmin -refreshNodes" for YARN are run after the host has been added and the 
> corresponding HDFS/YARN files are updated (see the sketch after this quote).
> Options
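
A minimal sketch of the refresh step the description calls for, running the 
exact commands quoted above via subprocess; the wiring is illustrative, not 
Ambari's actual implementation (which would go through its Execute resource):

{code}
# Sketch: after the include/exclude files have been rewritten for a host
# change, ask the NameNode and ResourceManager to re-read them.
import subprocess

def refresh_nodes():
    # Re-read dfs.include/dfs.exclude on the NameNode.
    subprocess.check_call(['su', '-l', 'hdfs', '-c', 'hdfs dfsadmin -refreshNodes'])
    # Re-read yarn.include/yarn.exclude on the ResourceManager.
    subprocess.check_call(['su', '-l', 'yarn', '-c', 'yarn rmadmin -refreshNodes'])
{code}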



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21345) Add host doesn't fully add a node when include/exclude files are used

2017-07-20 Thread Dmytro Sen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Sen updated AMBARI-21345:

Attachment: AMBARI-21345_addiotional.patch

> Add host doesn't fully add a node when include/exclude files are used
> -
>
> Key: AMBARI-21345
> URL: https://issues.apache.org/jira/browse/AMBARI-21345
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Paul Codding
>Assignee: Dmytro Sen
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21345_5.patch, AMBARI-21345_addiotional.patch
>
>
> When using dfs.include/dfs.exclude files for HDFS and 
> yarn.include/yarn.exclude files for YARN, we need to ensure these files are 
> updated whenever a host is added or removed, and we should also make sure 
> su -l hdfs -c "hdfs dfsadmin -refreshNodes" for HDFS and su -l yarn -c "yarn 
> rmadmin -refreshNodes" for YARN are run after the host has been added and the 
> corresponding HDFS/YARN files are updated.
> Options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21345) Add host doesn't fully add a node when include/exclude files are used

2017-07-20 Thread Dmytro Sen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Sen updated AMBARI-21345:

Status: Patch Available  (was: Reopened)

> Add host doesn't fully add a node when include/exclude files are used
> -
>
> Key: AMBARI-21345
> URL: https://issues.apache.org/jira/browse/AMBARI-21345
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Paul Codding
>Assignee: Dmytro Sen
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21345_5.patch, AMBARI-21345_addiotional.patch
>
>
> When using dfs.include/dfs.exclude files for HDFS and 
> yarn.include/yarn.exclude files for YARN, we need to ensure these files are 
> updated whenever a host is added or removed, and we should also make sure 
> su -l hdfs -c "hdfs dfsadmin -refreshNodes" for HDFS and su -l yarn -c "yarn 
> rmadmin -refreshNodes" for YARN are run after the host has been added and the 
> corresponding HDFS/YARN files are updated.
> Options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (AMBARI-21345) Add host doesn't fully add a node when include/exclude files are used

2017-07-20 Thread Dmytro Sen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Sen reopened AMBARI-21345:
-

> Add host doesn't fully add a node when include/exclude files are used
> -
>
> Key: AMBARI-21345
> URL: https://issues.apache.org/jira/browse/AMBARI-21345
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Paul Codding
>Assignee: Dmytro Sen
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21345_5.patch
>
>
> When using dfs.include/dfs.exclude files for HDFS and 
> yarn.include/yarn.exclude files for YARN, we need to ensure these files are 
> updated whenever a host is added or removed, and we should also make sure 
> su -l hdfs -c "hdfs dfsadmin -refreshNodes" for HDFS and su -l yarn -c "yarn 
> rmadmin -refreshNodes" for YARN are run after the host has been added and the 
> corresponding HDFS/YARN files are updated.
> Options



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (AMBARI-21527) Restart of MR2 History Server failed due to wrong NameNode RPC address

2017-07-20 Thread Doroszlai, Attila (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned AMBARI-21527:
--

Assignee: Doroszlai, Attila  (was: Siddharth Wagle)

> Restart of MR2 History Server failed due to wrong NameNode RPC address
> --
>
> Key: AMBARI-21527
> URL: https://issues.apache.org/jira/browse/AMBARI-21527
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Siddharth Wagle
>Assignee: Doroszlai, Attila
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21527.patch
>
>
> Steps:
> * Installed a BI 4.2 cluster on Ambari 2.2 with Slider and the services it required
> * Upgraded Ambari to 2.5.2.0-146
> * Registered HDP 2.6.1.0 repo, installed packages
> * Restarted services that needed restart
> * Ran service checks
> * Started upgrade
> Result: _Restarting History Server_ step failed with 
> {noformat:title=errors-87.txt}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 134, in 
> HistoryServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 841, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py",
>  line 85, in pre_upgrade_restart
> copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, 
> skip=params.sysprep_skip_copy_tarballs_hdfs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py",
>  line 267, in copy_to_hdfs
> replace_existing_files=replace_existing_files,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 560, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 557, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 292, in action_delayed
> self._create_resource()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 308, in _create_resource
> self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 423, in _create_file
> self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 204, in run_command
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --data-binary 
> @/usr/hdp/2.6.1.0-129/hadoop/mapreduce.tar.gz -H 'Content-Type: 
> application/octet-stream' 
> 'http://c7301.ambari.apache.org:50070/webhdfs/v1/hdp/apps/2.6.1.0-129/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444''
>  returned status_code=403. 
> {
>   "RemoteException": {
> "exception": "ConnectException", 
> "javaClassName": "java.net.ConnectException", 
> "message": "Call From c7301.ambari.apache.org/192.168.73.101 to 
> c7301.ambari.apache.org:8020 failed on connection exception: 
> java.net.ConnectException: Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused"
>   }
> }
> {noformat}
> {noformat:title=NameNode log, pre-upgrade restart}
> 2017-07-18 07:48:05,435 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(397)) - fs.defaultFS is 
> hdfs://c7301.ambari.apache.org:8020
> 2017-07-18 07:48:05,436 INFO  namenode.NameNode 
> (NameNode.java:setClientNamenodeAddress(417)) - Clients are to use 
> c7301.ambari.apache.org:8020 to access this namenode/service.
> 2017-07-18 07:48:07,343 

[jira] [Commented] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094725#comment-16094725
 ] 

Hudson commented on AMBARI-21535:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.5 #1726 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1726/])
AMBARI-21535. ACTIVITY_ANALYZER Install failed: Error: Unable to run the 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=86787c37f669440a85a0038fe834f267ad992b07])
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/files/changeToSecureUid.sh
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py


> ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script
> -
>
> Key: AMBARI-21535
> URL: https://issues.apache.org/jira/browse/AMBARI-21535
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21535.patch
>
>
> STR:
>   * Create the ambari-qa and hbase users with UIDs less than 1000
>   * Navigate through the UI install wizard. At the Customize Services page, 
> set the checkbox "Misc -> Have Ambari manage UIDs" to true/checked so that 
> after deployment the users created above will have UIDs >= 1000
>   * Go through the install wizard to finish the deployment. It fails at the 
> Activity Analyzer install with the below error 
> 
> 
> {
>   "href" : 
> "http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "INSTALL",
> "command_detail" : "ACTIVITY_ANALYZER INSTALL",
> "end_time" : 1500427251810,
> "error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
> "exit_code" : 1,
> "host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
> "id" : 29,
> "output_log" : "/var/lib/ambari-agent/data/output-29.txt",
> "request_id" : 4,
> "role" : "ACTIVITY_ANALYZER",
> "stage_id" : 0,
> "start_time" : 1500427242346,
> "status" : "FAILED",
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 35, in \nBeforeAnyHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 329, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 29, in hook\nsetup_users()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 60, in setup_users\nset_uid(params.smoke_user, 
> params.smoke_user_dirs)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) 
> || ({ignore_groupsusers_create_str})\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 262, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 72, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 102, in checked_call\ntries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 303, in _call\nraise ExecutionFailed(err_msg, code, out, 
> err)\nresource_management.core.exceptions.ExecutionFailed: Execution of 
> '/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
>  0' returned 1. Failed to find Uid between 1000 and 2000\nError: Error: 
> Unable to run the custom hook script ['/usr/bin/python', 
> 

[jira] [Commented] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094722#comment-16094722
 ] 

Hudson commented on AMBARI-21535:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7791 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7791/])
AMBARI-21535. ACTIVITY_ANALYZER Install failed: Error: Unable to run the 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=9c451107f316dbfcc45f99d536d2a6d4a4d99249])
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/files/changeToSecureUid.sh


> ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script
> -
>
> Key: AMBARI-21535
> URL: https://issues.apache.org/jira/browse/AMBARI-21535
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21535.patch
>
>
> STR:
>   * Create the ambari-qa and hbase users with UIDs less than 1000
>   * Navigate through the UI install wizard. At the Customize Services page, 
> set the checkbox "Misc -> Have Ambari manage UIDs" to true/checked so that 
> after deployment the users created above will have UIDs >= 1000
>   * Go through the install wizard to finish the deployment. It fails at the 
> Activity Analyzer install with the below error 
> 
> 
> {
>   "href" : 
> "http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "INSTALL",
> "command_detail" : "ACTIVITY_ANALYZER INSTALL",
> "end_time" : 1500427251810,
> "error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
> "exit_code" : 1,
> "host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
> "id" : 29,
> "output_log" : "/var/lib/ambari-agent/data/output-29.txt",
> "request_id" : 4,
> "role" : "ACTIVITY_ANALYZER",
> "stage_id" : 0,
> "start_time" : 1500427242346,
> "status" : "FAILED",
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 35, in \nBeforeAnyHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 329, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 29, in hook\nsetup_users()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 60, in setup_users\nset_uid(params.smoke_user, 
> params.smoke_user_dirs)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) 
> || ({ignore_groupsusers_create_str})\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 262, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 72, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 102, in checked_call\ntries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 303, in _call\nraise ExecutionFailed(err_msg, code, out, 
> err)\nresource_management.core.exceptions.ExecutionFailed: Execution of 
> '/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
>  0' returned 1. Failed to find Uid between 1000 and 2000\nError: Error: 
> Unable to run the custom hook script ['/usr/bin/python', 
> 

[jira] [Commented] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094685#comment-16094685
 ] 

Hadoop QA commented on AMBARI-21535:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12878171/AMBARI-21535.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-server.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11828//console

This message is automatically generated.

> ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script
> -
>
> Key: AMBARI-21535
> URL: https://issues.apache.org/jira/browse/AMBARI-21535
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21535.patch
>
>
> STR:
>   * Create the ambari-qa and hbase users with UIDs less than 1000
>   * Navigate through the UI install wizard. At the Customize Services page, 
> set the checkbox "Misc -> Have Ambari manage UIDs" to true/checked so that 
> after deployment the users created above will have UIDs >= 1000
>   * Go through the install wizard to finish the deployment. It fails at the 
> Activity Analyzer install with the below error 
> 
> 
> {
>   "href" : 
> "http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "INSTALL",
> "command_detail" : "ACTIVITY_ANALYZER INSTALL",
> "end_time" : 1500427251810,
> "error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
> "exit_code" : 1,
> "host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
> "id" : 29,
> "output_log" : "/var/lib/ambari-agent/data/output-29.txt",
> "request_id" : 4,
> "role" : "ACTIVITY_ANALYZER",
> "stage_id" : 0,
> "start_time" : 1500427242346,
> "status" : "FAILED",
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 35, in \nBeforeAnyHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 329, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 29, in hook\nsetup_users()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 60, in setup_users\nset_uid(params.smoke_user, 
> params.smoke_user_dirs)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) 
> || ({ignore_groupsusers_create_str})\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 262, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 72, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 102, in checked_call\ntries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 

[jira] [Updated] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21535:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5
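
For context, the failure above came from the hook being unable to pick an 
unused UID in a fixed range. A minimal illustrative sketch of that kind of scan 
(the real changeToSecureUid.sh is a shell script; this is not its actual logic):

{code}
# Sketch: find a free UID in [1000, 2000), the range the error message
# "Failed to find Uid between 1000 and 2000" refers to.
import pwd

def find_free_uid(lo=1000, hi=2000):
    used = set(p.pw_uid for p in pwd.getpwall())  # UIDs already taken
    for uid in range(lo, hi):
        if uid not in used:
            return uid
    raise RuntimeError('Failed to find Uid between %d and %d' % (lo, hi))
{code}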

> ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script
> -
>
> Key: AMBARI-21535
> URL: https://issues.apache.org/jira/browse/AMBARI-21535
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21535.patch
>
>
> STR:
>   * Create the ambari-qa and hbase users with UIDs less than 1000
>   * Navigate through the UI install wizard. At the Customize Services page, 
> set the checkbox "Misc -> Have Ambari manage UIDs" to true/checked so that 
> after deployment the users created above will have UIDs >= 1000
>   * Go through the install wizard to finish the deployment. It fails at the 
> Activity Analyzer install with the below error 
> 
> 
> {
>   "href" : 
> "http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "INSTALL",
> "command_detail" : "ACTIVITY_ANALYZER INSTALL",
> "end_time" : 1500427251810,
> "error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
> "exit_code" : 1,
> "host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
> "id" : 29,
> "output_log" : "/var/lib/ambari-agent/data/output-29.txt",
> "request_id" : 4,
> "role" : "ACTIVITY_ANALYZER",
> "stage_id" : 0,
> "start_time" : 1500427242346,
> "status" : "FAILED",
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 35, in \nBeforeAnyHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 329, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 29, in hook\nsetup_users()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 60, in setup_users\nset_uid(params.smoke_user, 
> params.smoke_user_dirs)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) 
> || ({ignore_groupsusers_create_str})\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 262, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 72, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 102, in checked_call\ntries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 303, in _call\nraise ExecutionFailed(err_msg, code, out, 
> err)\nresource_management.core.exceptions.ExecutionFailed: Execution of 
> '/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
>  0' returned 1. Failed to find Uid between 1000 and 2000\nError: Error: 
> Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-29.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-29.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']",
> "stdout" : "2017-07-19 01:20:49,237 - Stack Feature Version Info: 
> Cluster Stack=2.6, Cluster Current Version=None, Command Stack=None, Command 
> Version=None-> 2.6\n2017-07-19 

[jira] [Created] (AMBARI-21537) Backport AMBARI-12556 UI Work For Patch/Service Upgrades

2017-07-20 Thread Jonathan Hurley (JIRA)
Jonathan Hurley created AMBARI-21537:


 Summary: Backport AMBARI-12556 UI Work For Patch/Service Upgrades
 Key: AMBARI-21537
 URL: https://issues.apache.org/jira/browse/AMBARI-21537
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Reporter: Jonathan Hurley
Assignee: Antonenko Alexander
Priority: Blocker


As part of the effort for AMBARI-21450 (patch upgrades in Ambari 2.6), the 
following JIRAs should be backported into {{branch-feature-AMBARI-21450}}:

{code}
AMBARI-21344. Add Services Using Repository ID (alexantonenko)
AMBARI-21386. After install packages, upgrade button does not work 
(alexantonenko)
AMBARI-21102. To/From Version Information is Incorrect When Looking at Prior 
Upgrades (alexantonenko)
AMBARI-21046. UI: Upgrades should be started using repo_version_ids instead of 
version strings (alexantonenko)
AMBARI-21103. Creating a Downgrade From the Web Client Is Passing an 
Unsupported Property (alexantonenko)
AMBARI-21072. Removal of from/to Upgrade Versions in Web Client (alexantonenko)
AMBARI-21021. Service-level repositories should indicate 'Service' on the UI 
(alexantonenko)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21536) Trigger alert check via Ambari UI

2017-07-20 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated AMBARI-21536:
-
Description: 
Feature request: add the ability to trigger an alert check from the UI.

For example, the Ranger admin password check is showing critical / out of sync, 
but since the check runs only every 30 minutes, it would be nice to be able to 
trigger the alert check instantly after fixing the issue, to test and clear it 
quicker rather than waiting half an hour. Running a Ranger service check 
doesn't seem to do this; it just runs the health check for the login page.

It appears this can be triggered via an API call to the alert definition with 
run_now=true, so this simply needs to be exposed via the UI.

  was:
Feature Request add UI ability to trigger an alert check.

For example, Ranger admin password check is critical showing out of sync but 
since the check is run only every 30 mins it would be nice to be able to 
instantly trigger the alert check after fixing the issue to test and clear it 
quicker rather than waiting half an hour. Running a Ranger service check 
doesn't seem to do this, it just runs the health check for the login page.

It appears this can be triggered via an API call to the alert definition with 
run_now=true so this simply needs be to expose via the UI.


> Trigger alert check via Ambari UI
> -
>
> Key: AMBARI-21536
> URL: https://issues.apache.org/jira/browse/AMBARI-21536
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Affects Versions: 2.5.1
> Environment: HDF 3.0
>Reporter: Hari Sekhon
>
> Feature request: add the ability to trigger an alert check from the UI.
> For example, the Ranger admin password check is showing critical / out of sync, 
> but since the check runs only every 30 minutes, it would be nice to be able to 
> trigger the alert check instantly after fixing the issue, to test and clear it 
> quicker rather than waiting half an hour. Running a Ranger service check 
> doesn't seem to do this; it just runs the health check for the login page.
> It appears this can be triggered via an API call to the alert definition with 
> run_now=true, so this simply needs to be exposed via the UI.
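
A minimal sketch of the API call the reporter describes, assuming the standard 
Ambari REST endpoint for alert definitions; the host, cluster name, definition 
id, and credentials below are placeholders:

{code}
# Sketch: trigger an immediate run of a single alert definition over REST.
import requests

resp = requests.put(
    'http://ambari-host:8080/api/v1/clusters/cl1/alert_definitions/42',
    params={'run_now': 'true'},        # ask the server to schedule the check now
    auth=('admin', 'admin'),           # placeholder credentials
    headers={'X-Requested-By': 'ambari'},  # Ambari requires this on mutating calls
)
resp.raise_for_status()
{code}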



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21536) Trigger alert check via Ambari UI

2017-07-20 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated AMBARI-21536:
-
Priority: Major  (was: Minor)

> Trigger alert check via Ambari UI
> -
>
> Key: AMBARI-21536
> URL: https://issues.apache.org/jira/browse/AMBARI-21536
> Project: Ambari
>  Issue Type: New Feature
>  Components: ambari-web
>Affects Versions: 2.5.1
> Environment: HDF 3.0
>Reporter: Hari Sekhon
>
> Feature request: add the ability to trigger an alert check from the UI.
> For example, the Ranger admin password check is showing critical / out of sync, 
> but since the check runs only every 30 minutes, it would be nice to be able to 
> trigger the alert check instantly after fixing the issue, to test and clear it 
> quicker rather than waiting half an hour. Running a Ranger service check 
> doesn't seem to do this; it just runs the health check for the login page.
> It appears this can be triggered via an API call to the alert definition with 
> run_now=true, so this simply needs to be exposed via the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21536) Trigger alert check via Ambari UI

2017-07-20 Thread Hari Sekhon (JIRA)
Hari Sekhon created AMBARI-21536:


 Summary: Trigger alert check via Ambari UI
 Key: AMBARI-21536
 URL: https://issues.apache.org/jira/browse/AMBARI-21536
 Project: Ambari
  Issue Type: New Feature
  Components: ambari-web
Affects Versions: 2.5.1
 Environment: HDF 3.0
Reporter: Hari Sekhon
Priority: Minor


Feature request: add the ability to trigger an alert check from the UI.

For example, the Ranger admin password check is showing critical / out of sync, 
but since the check runs only every 30 minutes, it would be nice to be able to 
trigger the alert check instantly after fixing the issue, to test and clear it 
quicker rather than waiting half an hour. Running a Ranger service check 
doesn't seem to do this; it just runs the health check for the login page.

It appears this can be triggered via an API call to the alert definition with 
run_now=true, so this simply needs to be exposed via the UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21530) Service Checks During Upgrades Should Use Desired Stack

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094648#comment-16094648
 ] 

Hudson commented on AMBARI-21530:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1725 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1725/])
AMBARI-21530 - Service Checks During Upgrades Should Use Desired Stack 
(jhurley: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=282c4e213056d351331adac498d4655b8ebc9251])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/controller/internal/UpgradeResourceProvider.java
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
* (edit) ambari-server/src/test/python/TestStackFeature.py


> Service Checks During Upgrades Should Use Desired Stack
> ---
>
> Key: AMBARI-21530
> URL: https://issues.apache.org/jira/browse/AMBARI-21530
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21530.patch
>
>
> During an upgrade from BI 4.2 to HDP 2.6, some service checks were failing 
> because the service checks were having their hooks/service folders 
> overwritten by parts of the scheduler framework. At the time of orchestration, 
> the cluster's desired stack ID was still BI, but the effective ID used for the 
> upgrade was HDP (which was being clobbered).
> Exception on running YARN service check:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 91, in 
> ServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 54, in service_check
> user=params.smokeuser,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar' returned 
> 1. 17/07/19 19:34:40 INFO distributedshell.Client: Initializing Client
> 17/07/19 19:34:40 INFO distributedshell.Client: Running Client
> 17/07/19 19:34:40 INFO client.RMProxy: Connecting to ResourceManager at 
> sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:8050
> 17/07/19 19:34:40 INFO client.AHSProxy: Connecting to Application History 
> server at sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:10200
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=1
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster node info from ASM
> 17/07/19 19:34:40 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=sid-bigi-3.c.pramod-thangali.internal:45454, 
> nodeAddresssid-bigi-3.c.pramod-thangali.internal:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 17/07/19 19:34:40 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, 

[jira] [Created] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Andrew Onischuk (JIRA)
Andrew Onischuk created AMBARI-21535:


 Summary: ACTIVITY_ANALYZER Install failed: Error: Unable to run 
the custom hook script
 Key: AMBARI-21535
 URL: https://issues.apache.org/jira/browse/AMBARI-21535
 Project: Ambari
  Issue Type: Bug
Reporter: Andrew Onischuk
Assignee: Andrew Onischuk
 Fix For: 2.5.2


STR:

  * Create the ambari-qa and hbase users with UIDs less than 1000
  * Navigate through the UI install wizard. At the Customize Services page, 
set the checkbox "Misc -> Have Ambari manage UIDs" to true/checked so that 
after deployment the users created above will have UIDs >= 1000
  * Go through the install wizard to finish the deployment. It fails at the 
Activity Analyzer install with the below error 


{
  "href" : 
"http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
  "Tasks" : {
"attempt_cnt" : 1,
"cluster_name" : "cl1",
"command" : "INSTALL",
"command_detail" : "ACTIVITY_ANALYZER INSTALL",
"end_time" : 1500427251810,
"error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
"exit_code" : 1,
"host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
"id" : 29,
"output_log" : "/var/lib/ambari-agent/data/output-29.txt",
"request_id" : 4,
"role" : "ACTIVITY_ANALYZER",
"stage_id" : 0,
"start_time" : 1500427242346,
"status" : "FAILED",
"stderr" : "Traceback (most recent call last):\n  File 
\"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
 line 35, in \nBeforeAnyHook().execute()\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
 line 329, in execute\nmethod(env)\n  File 
\"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
 line 29, in hook\nsetup_users()\n  File 
\"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
 line 60, in setup_users\nset_uid(params.smoke_user, 
params.smoke_user_dirs)\n  File 
\"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
 line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) || 
({ignore_groupsusers_create_str})\"))\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
155, in __init__\nself.env.run()\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
line 160, in run\nself.run_action(resource, action)\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
line 124, in run_action\nprovider_action()\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
 line 262, in action_run\ntries=self.resource.tries, 
try_sleep=self.resource.try_sleep)\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
72, in inner\nresult = function(command, **kwargs)\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
102, in checked_call\ntries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
303, in _call\nraise ExecutionFailed(err_msg, code, out, 
err)\nresource_management.core.exceptions.ExecutionFailed: Execution of 
'/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
 0' returned 1. Failed to find Uid between 1000 and 2000\nError: Error: Unable 
to run the custom hook script ['/usr/bin/python', 
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
 'ANY', '/var/lib/ambari-agent/data/command-29.json', 
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
'/var/lib/ambari-agent/data/structured-out-29.json', 'INFO', 
'/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']",
"stdout" : "2017-07-19 01:20:49,237 - Stack Feature Version Info: 
Cluster Stack=2.6, Cluster Current Version=None, Command Stack=None, Command 
Version=None-> 2.6\n2017-07-19 01:20:49,286 - Using hadoop conf dir: 
/usr/hdp/current/hadoop-client/conf\nUser Group mapping (user_group) is missing 
in the hostLevelParams\n2017-07-19 01:20:49,288 - Group['hadoop'] 
{}\n2017-07-19 01:20:49,292 - Group['users'] {}\n2017-07-19 01:20:49,293 - 
File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': 
StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2017-07-19 01:20:49,297 - 
Writing File['/var/lib/ambari-agent/tmp/changeUid.sh'] because it doesn't 

[jira] [Updated] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21535:
-
Status: Patch Available  (was: Open)

> ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script
> -
>
> Key: AMBARI-21535
> URL: https://issues.apache.org/jira/browse/AMBARI-21535
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21535.patch
>
>
> STR:
>   * Create the ambari-qa and hbase users with UIDs less than 1000
>   * Navigate through the UI install wizard. At the Customize Services page, 
> set the checkbox "Misc -> Have Ambari manage UIDs" to true/checked so that 
> after deployment the users created above will have UIDs >= 1000
>   * Go through the install wizard to finish the deployment. It fails at the 
> Activity Analyzer install with the below error 
> 
> 
> {
>   "href" : 
> "http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "INSTALL",
> "command_detail" : "ACTIVITY_ANALYZER INSTALL",
> "end_time" : 1500427251810,
> "error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
> "exit_code" : 1,
> "host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
> "id" : 29,
> "output_log" : "/var/lib/ambari-agent/data/output-29.txt",
> "request_id" : 4,
> "role" : "ACTIVITY_ANALYZER",
> "stage_id" : 0,
> "start_time" : 1500427242346,
> "status" : "FAILED",
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 35, in \nBeforeAnyHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 329, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 29, in hook\nsetup_users()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 60, in setup_users\nset_uid(params.smoke_user, 
> params.smoke_user_dirs)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) 
> || ({ignore_groupsusers_create_str})\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 262, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 72, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 102, in checked_call\ntries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 303, in _call\nraise ExecutionFailed(err_msg, code, out, 
> err)\nresource_management.core.exceptions.ExecutionFailed: Execution of 
> '/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
>  0' returned 1. Failed to find Uid between 1000 and 2000\nError: Error: 
> Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-29.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-29.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']",
> "stdout" : "2017-07-19 01:20:49,237 - Stack Feature Version Info: 
> Cluster Stack=2.6, Cluster Current Version=None, Command Stack=None, Command 
> Version=None-> 2.6\n2017-07-19 01:20:49,286 - Using hadoop conf dir: 
> 

[jira] [Updated] (AMBARI-21535) ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script

2017-07-20 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21535:
-
Attachment: AMBARI-21535.patch

> ACTIVITY_ANALYZER Install failed: Error: Unable to run the custom hook script
> -
>
> Key: AMBARI-21535
> URL: https://issues.apache.org/jira/browse/AMBARI-21535
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21535.patch
>
>
> STR:
>   * Create the ambari-qa and hbase users with UIDs less than 1000
>   * Navigate through the UI install wizard. On the Customize Services page, set 
> the "Misc -> Have Ambari manage UIDs" checkbox to true/checked so that after 
> deployment the users created above will have UIDs >= 1000
>   * Go through the install wizard to finish the deployment. It fails at the 
> Activity Analyzer install with the error below: 
> 
> 
> {
>   "href" : 
> "http://172.27.25.210:8080/api/v1/clusters/cl1/requests/4/tasks/29;,
>   "Tasks" : {
> "attempt_cnt" : 1,
> "cluster_name" : "cl1",
> "command" : "INSTALL",
> "command_detail" : "ACTIVITY_ANALYZER INSTALL",
> "end_time" : 1500427251810,
> "error_log" : "/var/lib/ambari-agent/data/errors-29.txt",
> "exit_code" : 1,
> "host_name" : "ctr-e134-1499953498516-19756-01-05.hwx.site",
> "id" : 29,
> "output_log" : "/var/lib/ambari-agent/data/output-29.txt",
> "request_id" : 4,
> "role" : "ACTIVITY_ANALYZER",
> "stage_id" : 0,
> "start_time" : 1500427242346,
> "status" : "FAILED",
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 35, in \nBeforeAnyHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 329, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py\",
>  line 29, in hook\nsetup_users()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 60, in setup_users\nset_uid(params.smoke_user, 
> params.smoke_user_dirs)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py\",
>  line 149, in set_uid\nnot_if = format(\"(test $(id -u {user}) -gt 1000) 
> || ({ignore_groupsusers_create_str})\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 262, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 72, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 102, in checked_call\ntries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 150, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 303, in _call\nraise ExecutionFailed(err_msg, code, out, 
> err)\nresource_management.core.exceptions.ExecutionFailed: Execution of 
> '/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
>  0' returned 1. Failed to find Uid between 1000 and 2000\nError: Error: 
> Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-29.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-29.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']",
> "stdout" : "2017-07-19 01:20:49,237 - Stack Feature Version Info: 
> Cluster Stack=2.6, Cluster Current Version=None, Command Stack=None, Command 
> Version=None-> 2.6\n2017-07-19 01:20:49,286 - Using hadoop conf dir: 
> 
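
The "Failed to find Uid between 1000 and 2000" in the stderr above comes from 
changeUid.sh, which scans that range for an unused UID before re-assigning the 
user. A minimal sketch of that search in Python (find_free_uid and the range 
bounds are illustrative, not the script's actual code):

{code}
import pwd

def find_free_uid(low=1001, high=2000):
    """Return the first UID in [low, high] absent from the passwd database."""
    taken = {entry.pw_uid for entry in pwd.getpwall()}
    for uid in range(low, high + 1):
        if uid not in taken:
            return uid
    raise RuntimeError("Failed to find Uid between %d and %d" % (low, high))
{code}

Presumably every UID in that window was already allocated on the failing host, 
so the search falls through to the error quoted in the stderr above.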

[jira] [Commented] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094605#comment-16094605
 ] 

Hudson commented on AMBARI-21534:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1724 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1724/])
AMBARI-21534 Spinner doesnt disappear and hosts not loading even after 5 
(atkach: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=4078c48679aaea2f4d8636d141c2c1ca8cb68145])
* (edit) ambari-web/app/mappers/hosts_mapper.js


> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
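
The patch above edits ambari-web/app/mappers/hosts_mapper.js, which is 
JavaScript; the pattern it needs is a guard that skips host records whose 
stack-version data is undefined instead of dereferencing it. Sketched here in 
Python for consistency with this digest's other examples (the record layout is 
assumed, not Ambari's actual wire format):

{code}
def map_host_stack_versions(hosts_json):
    """Collect stack versions per host, skipping records without version data."""
    result = {}
    for item in hosts_json.get("items", []):
        versions = item.get("stack_versions")
        if not versions:  # guard: missing or empty -> skip instead of crashing
            continue
        host = item.get("Hosts", {}).get("host_name", "unknown")
        result[host] = [v.get("HostStackVersions", {}) for v in versions]
    return result
{code}

Without such a guard, a single host record lacking version data aborts the 
whole mapping pass, which is why the hosts never load and the spinner never 
clears.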


[jira] [Updated] (AMBARI-21530) Service Checks During Upgrades Should Use Desired Stack

2017-07-20 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-21530:
-
Status: Patch Available  (was: Open)

> Service Checks During Upgrades Should Use Desired Stack
> ---
>
> Key: AMBARI-21530
> URL: https://issues.apache.org/jira/browse/AMBARI-21530
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21530.patch
>
>
> During an upgrade from BI 4.2 to HDP 2.6, some service checks were failing 
> because their hooks/service folders were being overwritten by the scheduler 
> framework. At the time of orchestration, the cluster's desired stack ID was 
> still BI, but the effective ID used for the upgrade was HDP (whose folders 
> were being clobbered).
> Exception on running YARN service check:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 91, in 
> ServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 54, in service_check
> user=params.smokeuser,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar' returned 
> 1. 17/07/19 19:34:40 INFO distributedshell.Client: Initializing Client
> 17/07/19 19:34:40 INFO distributedshell.Client: Running Client
> 17/07/19 19:34:40 INFO client.RMProxy: Connecting to ResourceManager at 
> sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:8050
> 17/07/19 19:34:40 INFO client.AHSProxy: Connecting to Application History 
> server at sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:10200
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=1
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster node info from ASM
> 17/07/19 19:34:40 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=sid-bigi-3.c.pramod-thangali.internal:45454, 
> nodeAddresssid-bigi-3.c.pramod-thangali.internal:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 17/07/19 19:34:40 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: Max mem capability of 
> resources in this cluster 10240
> 17/07/19 19:34:40 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 3
> 17/07/19 19:34:40 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 17/07/19 19:34:41 FATAL distributedshell.Client: Error running Client
> java.io.FileNotFoundException: File 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar does not 
> exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
>   at 
> 
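
The FATAL at the end is the tell: no file matched the glob 
/usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar, so the 
shell passed it through literally and the Java client treated it as a path. 
The actual AMBARI-21530 fix is in Ambari's orchestration (use the desired 
stack's scripts); the sketch below only illustrates why the literal glob fails 
and how a script could expand it first (the fallback path is an assumption 
about the HDP layout, not part of the patch):

{code}
import glob

def find_distributedshell_jar():
    """Expand the jar glob ourselves instead of handing it to the Java client."""
    pattern = "hadoop-yarn-applications-distributedshell*.jar"
    candidates = (glob.glob("/usr/lib/hadoop-yarn/" + pattern) or
                  glob.glob("/usr/hdp/current/hadoop-yarn-client/" + pattern))
    if not candidates:
        raise IOError("distributedshell jar not found on this host")
    return candidates[0]

yarn_cmd = ("yarn org.apache.hadoop.yarn.applications.distributedshell.Client"
            " -shell_command ls -num_containers 1 -jar " +
            find_distributedshell_jar())
{code}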

[jira] [Updated] (AMBARI-21530) Service Checks During Upgrades Should Use Desired Stack

2017-07-20 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-21530:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Service Checks During Upgrades Should Use Desired Stack
> ---
>
> Key: AMBARI-21530
> URL: https://issues.apache.org/jira/browse/AMBARI-21530
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21530.patch
>
>
> During an upgrade from BI 4.2 to HDP 2.6, some service checks were failing 
> because their hooks/service folders were being overwritten by the scheduler 
> framework. At the time of orchestration, the cluster's desired stack ID was 
> still BI, but the effective ID used for the upgrade was HDP (whose folders 
> were being clobbered).
> Exception on running YARN service check:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 91, in 
> ServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 54, in service_check
> user=params.smokeuser,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar' returned 
> 1. 17/07/19 19:34:40 INFO distributedshell.Client: Initializing Client
> 17/07/19 19:34:40 INFO distributedshell.Client: Running Client
> 17/07/19 19:34:40 INFO client.RMProxy: Connecting to ResourceManager at 
> sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:8050
> 17/07/19 19:34:40 INFO client.AHSProxy: Connecting to Application History 
> server at sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:10200
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=1
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster node info from ASM
> 17/07/19 19:34:40 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=sid-bigi-3.c.pramod-thangali.internal:45454, 
> nodeAddresssid-bigi-3.c.pramod-thangali.internal:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 17/07/19 19:34:40 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: Max mem capability of 
> resources in this cluster 10240
> 17/07/19 19:34:40 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 3
> 17/07/19 19:34:40 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 17/07/19 19:34:41 FATAL distributedshell.Client: Error running Client
> java.io.FileNotFoundException: File 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar does not 
> exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
>   at 
> 

[jira] [Updated] (AMBARI-21530) Service Checks During Upgrades Should Use Desired Stack

2017-07-20 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-21530:
-
Attachment: AMBARI-21530.patch

> Service Checks During Upgrades Should Use Desired Stack
> ---
>
> Key: AMBARI-21530
> URL: https://issues.apache.org/jira/browse/AMBARI-21530
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: 2.5.2
>
> Attachments: AMBARI-21530.patch
>
>
> During an upgrade from BI 4.2 to HDP 2.6, some service checks were failing 
> because their hooks/service folders were being overwritten by the scheduler 
> framework. At the time of orchestration, the cluster's desired stack ID was 
> still BI, but the effective ID used for the upgrade was HDP (whose folders 
> were being clobbered).
> Exception on running YARN service check:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 91, in 
> ServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/YARN/package/scripts/service_check.py",
>  line 54, in service_check
> user=params.smokeuser,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar' returned 
> 1. 17/07/19 19:34:40 INFO distributedshell.Client: Initializing Client
> 17/07/19 19:34:40 INFO distributedshell.Client: Running Client
> 17/07/19 19:34:40 INFO client.RMProxy: Connecting to ResourceManager at 
> sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:8050
> 17/07/19 19:34:40 INFO client.AHSProxy: Connecting to Application History 
> server at sid-bigi-2.c.pramod-thangali.internal/10.240.0.47:10200
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=1
> 17/07/19 19:34:40 INFO distributedshell.Client: Got Cluster node info from ASM
> 17/07/19 19:34:40 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=sid-bigi-3.c.pramod-thangali.internal:45454, 
> nodeAddresssid-bigi-3.c.pramod-thangali.internal:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 17/07/19 19:34:40 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 17/07/19 19:34:40 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=ADMINISTER_QUEUE
> 17/07/19 19:34:40 INFO distributedshell.Client: Max mem capability of 
> resources in this cluster 10240
> 17/07/19 19:34:40 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 3
> 17/07/19 19:34:40 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 17/07/19 19:34:41 FATAL distributedshell.Client: Error running Client
> java.io.FileNotFoundException: File 
> /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell*.jar does not 
> exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
>   at 
> 

[jira] [Updated] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21534:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094577#comment-16094577
 ] 

Andrii Tkach commented on AMBARI-21534:
---

committed to branch-2.5

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21534:
--
Attachment: AMBARI-21534.patch

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21534:
--
Attachment: (was: AMBARI-21534.patch)

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21406) Refresh configurations without restarting components

2017-07-20 Thread Sandor Magyari (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Magyari updated AMBARI-21406:

Summary: Refresh configurations without restarting components  (was: 
Refresh configurations without restart command)

> Refresh configurations without restarting components
> 
>
> Key: AMBARI-21406
> URL: https://issues.apache.org/jira/browse/AMBARI-21406
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-agent, ambari-server
>Reporter: Sandor Magyari
>Assignee: Sandor Magyari
> Fix For: 3.0.0
>
> Attachments: AMBARI-21406-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Aleksandr Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094570#comment-16094570
 ] 

Aleksandr Kovalenko commented on AMBARI-21534:
--

+1 for the patch

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094567#comment-16094567
 ] 

Hudson commented on AMBARI-21533:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1723 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1723/])
AMBARI-21533 Text change for bypassing prechecks. (atkach) (atkach: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=f9ce03e161c0b5c1b16c6d5d337913b743182276])
* (edit) ambari-web/app/messages.js


> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21498) DB consistency checker throws errors for missing 'product-info' configs after Ambari upgrade

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094564#comment-16094564
 ] 

Hudson commented on AMBARI-21498:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #7790 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7790/])
AMBARI-21498. DB consistency checker throws errors for missing (dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=d999343f97fe4a92625327b6f6e48c0c7c3f3ecf])
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog252.java


> DB consistency checker throws errors for missing 'product-info' configs after 
> Ambari upgrade
> 
>
> Key: AMBARI-21498
> URL: https://issues.apache.org/jira/browse/AMBARI-21498
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.5.3
>
> Attachments: AMBARI-21498.patch
>
>
> DB consistency checker throws errors for missing 'product-info' configs after 
> Ambari upgrade
> AMBARI-21364 fixed the missing 'parquet-logging' config, but 'product-info' is 
> still missing for SmartSense.
> STR
> Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 2.6.1.0-129
> Upgrade Ambari to 2.5.2.0-105
> Run "ambari-server start"
> Live openstack cluster : 172.22.124.150 (Ambari Server)
> ambari-server-check-database.log
> {code}
> 2017-07-07 18:09:05,678 ERROR - Required config(s): product-info is(are) not 
> available for service SMARTSENSE with service config version 1 in cluster cl1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
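
The checker's rule is straightforward: every config type the stack declares 
required for a service must exist in that service's persisted config version. 
A minimal sketch of the check (the data shapes and the 'hst-server-conf' type 
name are illustrative, not Ambari's actual model):

{code}
def find_missing_required_configs(required_types, persisted_types):
    """Both args: dicts mapping service name -> set of config type names."""
    missing = {}
    for service, required in required_types.items():
        absent = required - persisted_types.get(service, set())
        if absent:
            missing[service] = sorted(absent)
    return missing

# Reproduces the error above: SMARTSENSE requires 'product-info', but the
# upgraded database never got one.
print(find_missing_required_configs(
    {"SMARTSENSE": {"hst-server-conf", "product-info"}},
    {"SMARTSENSE": {"hst-server-conf"}}))  # {'SMARTSENSE': ['product-info']}
{code}

The patch above edits UpgradeCatalog252, addressing it from the data side 
during the Ambari schema upgrade.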


[jira] [Commented] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094563#comment-16094563
 ] 

Hudson commented on AMBARI-21531:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #7790 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7790/])
AMBARI-21531. Client component restart fails after Ambari upgrade while 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=8c15965e090c1666702d08b860a796015c79f679])
* (edit) 
ambari-server/src/test/python/stacks/2.0.6/hooks/before-ANY/test_before_any.py
* (edit) ambari-common/src/main/python/resource_management/core/base.py
* (edit) 
ambari-common/src/main/python/resource_management/core/providers/accounts.py
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
* (edit) 
ambari-common/src/main/python/resource_management/core/resources/accounts.py
* (edit) ambari-agent/src/test/python/resource_management/TestUserResource.py


> Client component restart fails after Ambari upgrade while running custom hook 
> script on Suse 11
> ---
>
> Key: AMBARI-21531
> URL: https://issues.apache.org/jira/browse/AMBARI-21531
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21531.patch
>
>
> Seen on two clusters with SUSE 11 SP4 OS
> **STR**
>   1. Deployed clusters with Ambari version: 2.4.2.0-136 and HDP version: 
> 2.5.3.0-37 (secure clusters; wire encryption enabled on one cluster, disabled 
> on the second)
>   2. Upgrade Ambari to 2.5.2.0-147 (hash: 
> be3a875972224d7eb420c783a9f2cbdc7157)
>   3. Regenerate keytabs post upgrade and then try to restart all services
> **Result:**  
> Observed errors at start of Falcon, HBase, Atlas clients:
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 35, in 
> BeforeAnyHook().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 29, in hook
> setup_users()
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
>  line 51, in setup_users
> groups = params.user_to_groups_dict[user],
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
>  line 82, in action_create
> shell.checked_call(command, sudo=True)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'usermod -u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: 
> `hadoop' is primary group name.
> usermod: `hadoop' is primary group name.
> usermod: UID 1002 is not unique.
> Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
> the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
>  'START', '/var/lib/ambari-agent/data/command-864.json', 
> 
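
As the output above shows, usermod on SUSE 11 refuses the change when the 
requested UID is already held by another account ("UID 1002 is not unique") 
and warns when -g/-G repeat the primary group, so the hook has to validate 
before modifying the account. A small sketch of such a guard (uid_owner and 
usermod_args are illustrative helper names, not Ambari's actual code):

{code}
import pwd

def uid_owner(uid):
    """Return the login currently holding 'uid', or None if it is free."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return None

def usermod_args(user, desired_uid, groups):
    owner = uid_owner(desired_uid)
    if owner not in (None, user):
        raise RuntimeError("UID %d is not unique (held by %s)"
                           % (desired_uid, owner))
    # de-duplicate entries such as 'hadoop,hadoop' before passing them to -G
    supplementary = sorted(set(groups))
    return ["usermod", "-u", str(desired_uid),
            "-G", ",".join(supplementary), user]
{code}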

[jira] [Commented] (AMBARI-21532) Namenode restart - PID file delete happens before the call to check status

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094545#comment-16094545
 ] 

Hadoop QA commented on AMBARI-21532:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12878143/AMBARI-21532.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-server.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11827//console

This message is automatically generated.

> Namenode restart - PID file delete happens before the call to check status
> --
>
> Key: AMBARI-21532
> URL: https://issues.apache.org/jira/browse/AMBARI-21532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21532.patch
>
>
> PID file delete happens before the call to check status.
> {code}
> ...
> 2017-07-06 00:03:21,004 - 
> File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete']}
> 2017-07-06 00:05:21,103 - Waiting for actual component stop
> 2017-07-06 00:05:21,104 - Pid file 
> /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
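
The quoted log shows the ordering bug directly: the PID file is removed first, 
so the later "Waiting for actual component stop" has nothing left to poll. The 
safe order is poll-then-delete; a minimal sketch (the function names are 
illustrative, not the Ambari handler's actual code):

{code}
import os
import time

def pid_running(pid):
    """True if a process with this pid exists (signal 0 checks, sends nothing)."""
    try:
        os.kill(pid, 0)
        return True
    except OSError:
        return False

def stop_wait_then_clean(pid_file, timeout=120):
    with open(pid_file) as f:
        pid = int(f.read().strip())
    deadline = time.time() + timeout
    while pid_running(pid) and time.time() < deadline:
        time.sleep(1)  # keep polling while the component shuts down
    if pid_running(pid):
        raise RuntimeError("component did not stop within %ss" % timeout)
    os.remove(pid_file)  # delete the PID file only after the process is gone
{code}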


[jira] [Commented] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094532#comment-16094532
 ] 

Hudson commented on AMBARI-21531:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1722 (See 
[https://builds.apache.org/job/Ambari-branch-2.5/1722/])
AMBARI-21531. Client component restart fails after Ambari upgrade while 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=9ff3d66a683e45b2257584bb65425fae255e5087])
* (edit) ambari-agent/src/test/python/resource_management/TestUserResource.py
* (edit) 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
* (edit) 
ambari-common/src/main/python/resource_management/core/providers/accounts.py
* (edit) ambari-common/src/main/python/resource_management/core/base.py
* (edit) 
ambari-server/src/test/python/stacks/2.0.6/hooks/before-ANY/test_before_any.py
* (edit) 
ambari-common/src/main/python/resource_management/core/resources/accounts.py


> Client component restart fails after Ambari upgrade while running custom hook 
> script on Suse 11
> ---
>
> Key: AMBARI-21531
> URL: https://issues.apache.org/jira/browse/AMBARI-21531
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21531.patch
>
>
> Seen on two clusters with SUSE 11 SP4 OS
> **STR**
>   1. Deployed clusters with Ambari version: 2.4.2.0-136 and HDP version: 
> 2.5.3.0-37 (secure clusters; wire encryption enabled on one cluster, disabled 
> on the second)
>   2. Upgrade Ambari to 2.5.2.0-147 (hash: 
> be3a875972224d7eb420c783a9f2cbdc7157)
>   3. Regenerate keytabs post upgrade and then try to restart all services
> **Result:**  
> Observed errors at start of Falcon, HBase, Atlas clients:
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 35, in 
> BeforeAnyHook().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 29, in hook
> setup_users()
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
>  line 51, in setup_users
> groups = params.user_to_groups_dict[user],
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
>  line 82, in action_create
> shell.checked_call(command, sudo=True)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'usermod -u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: 
> `hadoop' is primary group name.
> usermod: `hadoop' is primary group name.
> usermod: UID 1002 is not unique.
> Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
> the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
>  'START', '/var/lib/ambari-agent/data/command-864.json', 
> 

[jira] [Updated] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21534:
--
Status: Patch Available  (was: Open)

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094529#comment-16094529
 ] 

Andrii Tkach commented on AMBARI-21534:
---

30389 passing (26s)
  157 pending


> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21534:
--
Attachment: AMBARI-21534.patch

> Spinner doesnt disappear and hosts not loading even after 5 minutes
> ---
>
> Key: AMBARI-21534
> URL: https://issues.apache.org/jira/browse/AMBARI-21534
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.5.2
>
> Attachments: AMBARI-21534.patch
>
>
> There were no failed requests, but there were JavaScript errors when 
> trying to load the Hosts page.
> {code}
> app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
> undefined
> at Class.map (app.js:59900)
> at Class.newFunc [as map] (vendor.js:2608)
> at app.js:188588
> map   @   app.js:59900
> newFunc   @   vendor.js:2608
> (anonymous)   @   app.js:188588
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21534) Spinner doesnt disappear and hosts not loading even after 5 minutes

2017-07-20 Thread Andrii Tkach (JIRA)
Andrii Tkach created AMBARI-21534:
-

 Summary: Spinner doesnt disappear and hosts not loading even after 
5 minutes
 Key: AMBARI-21534
 URL: https://issues.apache.org/jira/browse/AMBARI-21534
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 2.5.2
Reporter: Andrii Tkach
Assignee: Andrii Tkach
Priority: Critical
 Fix For: 2.5.2


There were no failed requests, but there were JavaScript errors when 
trying to load the Hosts page.
{code}
app.js:59900 Uncaught TypeError: Cannot read property 'HostStackVersions' of 
undefined
at Class.map (app.js:59900)
at Class.newFunc [as map] (vendor.js:2608)
at app.js:188588
map @   app.js:59900
newFunc @   vendor.js:2608
(anonymous) @   app.js:188588
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21516) Log Search docker test environment build front/backend only

2017-07-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094526#comment-16094526
 ] 

Hudson commented on AMBARI-21516:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #7789 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/7789/])
AMBARI-21516 Log Search docker test environment build front/backend only 
(mgergely: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=587c42d79da4b384ab18d7078c6d045a807a7bb5])
* (edit) ambari-logsearch/docker/logsearch-docker.sh


> Log Search docker test environment build front/backend only
> ---
>
> Key: AMBARI-21516
> URL: https://issues.apache.org/jira/browse/AMBARI-21516
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21516.patch
>
>
> Building the logsearch project for testing on docker takes some time, while 
> often only the backend or frontend was modified. Which parts get built should 
> be configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
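
The committed change makes logsearch-docker.sh accept a build selector so a 
frontend-only or backend-only rebuild can skip the other half. The script 
itself is shell; the idea, sketched in Python for consistency with the rest of 
this digest (the flag and module names are invented, not the script's actual 
options):

{code}
import argparse
import subprocess

parser = argparse.ArgumentParser(description="Rebuild only the requested parts.")
parser.add_argument("--backend", action="store_true")
parser.add_argument("--frontend", action="store_true")
args = parser.parse_args()

# With no selector given, build everything, matching the old behaviour.
build_all = not (args.backend or args.frontend)
if build_all or args.backend:
    subprocess.check_call(["mvn", "package", "-pl", "ambari-logsearch-server"])
if build_all or args.frontend:
    subprocess.check_call(["mvn", "package", "-pl", "ambari-logsearch-web"])
{code}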


[jira] [Commented] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094522#comment-16094522
 ] 

Hadoop QA commented on AMBARI-21531:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12878141/AMBARI-21531.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-agent ambari-server.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11826//console

This message is automatically generated.

> Client component restart fails after Ambari upgrade while running custom hook 
> script on Suse 11
> ---
>
> Key: AMBARI-21531
> URL: https://issues.apache.org/jira/browse/AMBARI-21531
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21531.patch
>
>
> Seen on two clusters with SUSE 11 SP4 OS
> **STR**
>   1. Deployed clusters with Ambari version: 2.4.2.0-136 and HDP version: 
> 2.5.3.0-37 (secure clusters; wire encryption enabled on one cluster, disabled 
> on the second)
>   2. Upgrade Ambari to 2.5.2.0-147 (hash: 
> be3a875972224d7eb420c783a9f2cbdc7157)
>   3. Regenerate keytabs post upgrade and then try to restart all services
> **Result:**  
> Observed errors at start of Falcon, HBase, Atlas clients:
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 35, in 
> BeforeAnyHook().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 29, in hook
> setup_users()
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
>  line 51, in setup_users
> groups = params.user_to_groups_dict[user],
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
>  line 82, in action_create
> shell.checked_call(command, sudo=True)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'usermod -u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: 
> `hadoop' is primary group name.
> usermod: `hadoop' is primary group name.
> usermod: UID 1002 is not unique.
> Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
> the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
>  'START', '/var/lib/ambari-agent/data/command-864.json', 
> 

[jira] [Commented] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094517#comment-16094517
 ] 

Andrii Tkach commented on AMBARI-21533:
---

committed to branch-2.5

> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21533:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Aleksandr Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094512#comment-16094512
 ] 

Aleksandr Kovalenko commented on AMBARI-21533:
--

+1 for the patch

> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094510#comment-16094510
 ] 

Andrii Tkach commented on AMBARI-21533:
---

30389 passing (24s)
  157 pending

> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21533:
--
Status: Patch Available  (was: Open)

> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-21533:
--
Attachment: AMBARI-21533.patch

> Text change for bypassing prechecks
> ---
>
> Key: AMBARI-21533
> URL: https://issues.apache.org/jira/browse/AMBARI-21533
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 2.5.2
>
> Attachments: AMBARI-21533.patch
>
>
> Make the following change:
> {quote}
> "Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21533) Text change for bypassing prechecks

2017-07-20 Thread Andrii Tkach (JIRA)
Andrii Tkach created AMBARI-21533:
-

 Summary: Text change for bypassing prechecks
 Key: AMBARI-21533
 URL: https://issues.apache.org/jira/browse/AMBARI-21533
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 2.5.2
Reporter: Andrii Tkach
Assignee: Andrii Tkach
 Fix For: 2.5.2


Make the following change:

{quote}
"Bypassed errors, proceed at your own risk" to "Upgrade Checks Bypassed"
{quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21532) Namenode restart - PID file delete happens before the call to check status

2017-07-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21532:

Fix Version/s: 2.5.3

> Namenode restart - PID file delete happens before the call to check status
> --
>
> Key: AMBARI-21532
> URL: https://issues.apache.org/jira/browse/AMBARI-21532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.5.3
>
> Attachments: AMBARI-21532.patch
>
>
> PID file delete happens before the call to check status.
> {code}
> ...
> 2017-07-06 00:03:21,004 - 
> File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete']}
> 2017-07-06 00:05:21,103 - Waiting for actual component stop
> 2017-07-06 00:05:21,104 - Pid file 
> /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21498) DB consistency checker throws errors for missing 'product-info' configs after Ambari upgrade

2017-07-20 Thread Dmitry Lysnichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094498#comment-16094498
 ] 

Dmitry Lysnichenko commented on AMBARI-21498:
-

Committed to trunk

To https://git-wip-us.apache.org/repos/asf/ambari.git
   8c15965e09..d999343f97  trunk -> trunk

Waiting for 2.5.2 branch off

> DB consistency checker throws errors for missing 'product-info' configs after 
> Ambari upgrade
> 
>
> Key: AMBARI-21498
> URL: https://issues.apache.org/jira/browse/AMBARI-21498
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.5.3
>
> Attachments: AMBARI-21498.patch
>
>
> DB consistency checker throws errors for missing 'product-info' configs after 
> Ambari upgrade
> AMBARI-21364 fixed the missing 'parquet-logging' but 'product-info' is still 
> missing for Smartsense
> STR
> Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 2.6.1.0-129
> Upgrade Ambari to 2.5.2.0-105
> Run "ambari-server start"
> Live openstack cluster : 172.22.124.150 (Ambari Server)
> ambari-server-check-database.log
> {code}
> 2017-07-07 18:09:05,678 ERROR - Required config(s): product-info is(are) not 
> available for service SMARTSENSE with service config version 1 in cluster cl1
> {code}
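For illustration, here is a minimal Python sketch of the kind of validation the DB consistency checker performs; the required-config mapping and function name below are assumptions for the example, not Ambari's actual implementation.

{code}
# Hypothetical sketch: each service declares required config types, and the
# checker reports any type missing from the service's config version.
REQUIRED_CONFIGS = {
    'SMARTSENSE': ['product-info'],          # assumed mapping, for illustration
    'HIVE': ['hive-site', 'parquet-logging'],
}

def check_required_configs(service, available_config_types, cluster='cl1'):
    missing = [c for c in REQUIRED_CONFIGS.get(service, [])
               if c not in available_config_types]
    if missing:
        print("ERROR - Required config(s): %s is(are) not available for "
              "service %s in cluster %s" % (', '.join(missing), service, cluster))
    return missing

# Reproduces the shape of the error in the log above:
check_required_configs('SMARTSENSE', ['hst-server-conf'])
{code}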



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21532) Namenode restart - PID file delete happens before the call to check status

2017-07-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21532:

Component/s: ambari-server

> Namenode restart - PID file delete happens before the call to check status
> --
>
> Key: AMBARI-21532
> URL: https://issues.apache.org/jira/browse/AMBARI-21532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-21532.patch
>
>
> PID file delete happens before the call to check status.
> {code}
> ...
> 2017-07-06 00:03:21,004 - 
> File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete']}
> 2017-07-06 00:05:21,103 - Waiting for actual component stop
> 2017-07-06 00:05:21,104 - Pid file 
> /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21532) Namenode restart - PID file delete happens before the call to check status

2017-07-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21532:

Attachment: AMBARI-21532.patch

> Namenode restart - PID file delete happens before the call to check status
> --
>
> Key: AMBARI-21532
> URL: https://issues.apache.org/jira/browse/AMBARI-21532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-21532.patch
>
>
> PID file delete happens before the call to check status.
> {code}
> ...
> 2017-07-06 00:03:21,004 - 
> File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete']}
> 2017-07-06 00:05:21,103 - Waiting for actual component stop
> 2017-07-06 00:05:21,104 - Pid file 
> /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21532) Namenode restart - PID file delete happens before the call to check status

2017-07-20 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-21532:
---

 Summary: Namenode restart - PID file delete happens before the 
call to check status
 Key: AMBARI-21532
 URL: https://issues.apache.org/jira/browse/AMBARI-21532
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Critical



PID file delete happens before the call to check status.

{code}
...
2017-07-06 00:03:21,004 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] 
{'action': ['delete']}
2017-07-06 00:05:21,103 - Waiting for actual component stop
2017-07-06 00:05:21,104 - Pid file 
/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
{code}
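For illustration, a minimal Python sketch of the corrected ordering, assuming a plain PID-file convention (this is not the actual Ambari patch): wait for the component to stop first, and delete the PID file only after the stop is confirmed.

{code}
import os
import time

def wait_for_process_exit(pid_file, timeout=120, interval=5):
    """Poll the PID from pid_file until the process is gone or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with open(pid_file) as f:
                pid = int(f.read().strip())
        except (IOError, ValueError):
            return True   # PID file already empty or missing
        try:
            os.kill(pid, 0)   # signal 0 only checks that the process exists
        except OSError:
            return True   # process has exited
        time.sleep(interval)
    return False

def stop_component(pid_file):
    # Check status first; delete the PID file only after a confirmed stop.
    if not wait_for_process_exit(pid_file):
        raise RuntimeError("Component did not stop in time; keeping " + pid_file)
    if os.path.exists(pid_file):
        os.remove(pid_file)
{code}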





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21532) Namenode restart - PID file delete happens before the call to check status

2017-07-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21532:

Status: Patch Available  (was: Open)

> Namenode restart - PID file delete happens before the call to check status
> --
>
> Key: AMBARI-21532
> URL: https://issues.apache.org/jira/browse/AMBARI-21532
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-21532.patch
>
>
> PID file delete happens before the call to check status.
> {code}
> ...
> 2017-07-06 00:03:21,004 - 
> File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete']}
> 2017-07-06 00:05:21,103 - Waiting for actual component stop
> 2017-07-06 00:05:21,104 - Pid file 
> /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid is empty or does not exist
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21531:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.5

> Client component restart fails after Ambari upgrade while running custom hook 
> script on Suse 11
> ---
>
> Key: AMBARI-21531
> URL: https://issues.apache.org/jira/browse/AMBARI-21531
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21531.patch
>
>
> Seen in two clusters with Suse 11 SP4 OS
> **STR**
>   1. Deployed cluster with Ambari version: 2.4.2.0-136 and HDP version: 
> 2.5.3.0-37 (secure cluster, wire encryption enabled on one cluster, disabled on 
> the second cluster)
>   2. Upgrade Ambari to 2.5.2.0-147 (hash: 
> be3a875972224d7eb420c783a9f2cbdc7157)
>   3. Regenerate keytabs post upgrade and then try to restart all services
> **Result:**  
> Observed errors at start of Falcon, HBase, Atlas clients:
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 35, in 
> BeforeAnyHook().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 29, in hook
> setup_users()
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
>  line 51, in setup_users
> groups = params.user_to_groups_dict[user],
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
>  line 82, in action_create
> shell.checked_call(command, sudo=True)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'usermod -u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: 
> `hadoop' is primary group name.
> usermod: `hadoop' is primary group name.
> usermod: UID 1002 is not unique.
> Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
> the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
>  'START', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']
> 
> Suspect this has something to do with the TLS v1 protocol on Suse 11.4.
> Cluster:  (alive for 48h)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21531:
-
Status: Patch Available  (was: Open)

> Client component restart fails after Ambari upgrade while running custom hook 
> script on Suse 11
> ---
>
> Key: AMBARI-21531
> URL: https://issues.apache.org/jira/browse/AMBARI-21531
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21531.patch
>
>
> Seen in two clusters with Suse 11 SP4 OS
> **STR**
>   1. Deployed cluster with Ambari version: 2.4.2.0-136 and HDP version: 
> 2.5.3.0-37 (secure cluster, wire encryption enabled on one cluster, disabled on 
> the second cluster)
>   2. Upgrade Ambari to 2.5.2.0-147 (hash: 
> be3a875972224d7eb420c783a9f2cbdc7157)
>   3. Regenerate keytabs post upgrade and then try to restart all services
> **Result:**  
> Observed errors at start of Falcon, HBase, Atlas clients:
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 35, in 
> BeforeAnyHook().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 29, in hook
> setup_users()
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
>  line 51, in setup_users
> groups = params.user_to_groups_dict[user],
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
>  line 82, in action_create
> shell.checked_call(command, sudo=True)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'usermod -u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: 
> `hadoop' is primary group name.
> usermod: `hadoop' is primary group name.
> usermod: UID 1002 is not unique.
> Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
> the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
>  'START', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']
> 
> Suspect this has something to do with the TLS v1 protocol on Suse 11.4.
> Cluster:  (alive for 48h)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-21531:
-
Attachment: AMBARI-21531.patch

> Client component restart fails after Ambari upgrade while running custom hook 
> script on Suse 11
> ---
>
> Key: AMBARI-21531
> URL: https://issues.apache.org/jira/browse/AMBARI-21531
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.2
>
> Attachments: AMBARI-21531.patch
>
>
> Seen in two clusters with Suse 11 SP4 OS
> **STR**
>   1. Deployed cluster with Ambari version: 2.4.2.0-136 and HDP version: 
> 2.5.3.0-37 (secure cluster, wire encryption enabled on one cluster, disabled on 
> the second cluster)
>   2. Upgrade Ambari to 2.5.2.0-147 (hash: 
> be3a875972224d7eb420c783a9f2cbdc7157)
>   3. Regenerate keytabs post upgrade and then try to restart all services
> **Result:**  
> Observed errors at start of Falcon, HBase, Atlas clients:
> 
> 
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 35, in 
> BeforeAnyHook().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
>  line 29, in hook
> setup_users()
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
>  line 51, in setup_users
> groups = params.user_to_groups_dict[user],
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
>  line 82, in action_create
> shell.checked_call(command, sudo=True)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 72, in inner
> result = function(command, **kwargs)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 102, in checked_call
> tries=tries, try_sleep=try_sleep, 
> timeout_kill_strategy=timeout_kill_strategy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
> 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 
> 'usermod -u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: 
> `hadoop' is primary group name.
> usermod: `hadoop' is primary group name.
> usermod: UID 1002 is not unique.
> Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
>  'ANY', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
> the custom hook script ['/usr/bin/python', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
>  'START', '/var/lib/ambari-agent/data/command-864.json', 
> '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START', 
> '/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
> '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']
> 
> Suspect this has something to do with the TLS v1 protocol on Suse 11.4.
> Cluster:  (alive for 48h)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21531) Client component restart fails after Ambari upgrade while running custom hook script on Suse 11

2017-07-20 Thread Andrew Onischuk (JIRA)
Andrew Onischuk created AMBARI-21531:


 Summary: Client component restart fails after Ambari upgrade while 
running custom hook script on Suse 11
 Key: AMBARI-21531
 URL: https://issues.apache.org/jira/browse/AMBARI-21531
 Project: Ambari
  Issue Type: Bug
Reporter: Andrew Onischuk
Assignee: Andrew Onischuk
 Fix For: 2.5.2


Seen in two clusters with Suse 11 SP4 OS

**STR**

  1. Deployed cluster with Ambari version: 2.4.2.0-136 and HDP version: 
2.5.3.0-37 (secure cluster, wire encryption enabled on one cluster, disabled on 
the second cluster)
  2. Upgrade Ambari to 2.5.2.0-147 (hash: 
be3a875972224d7eb420c783a9f2cbdc7157)
  3. Regenerate keytabs post upgrade and then try to restart all services

**Result:**  
Observed errors at start of Falcon, HBase, Atlas clients:




Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
 line 35, in 
BeforeAnyHook().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 329, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py",
 line 29, in hook
setup_users()
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py",
 line 51, in setup_users
groups = params.user_to_groups_dict[user],
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/accounts.py",
 line 82, in action_create
shell.checked_call(command, sudo=True)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, 
in inner
result = function(command, **kwargs)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, 
in checked_call
tries=tries, try_sleep=try_sleep, 
timeout_kill_strategy=timeout_kill_strategy)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, 
in _call_wrapper
result = _call(command, **kwargs_copy)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, 
in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'usermod 
-u 1002 -G hadoop,hadoop -g hadoop hive' returned 11. usermod: `hadoop' is 
primary group name.
usermod: `hadoop' is primary group name.
usermod: UID 1002 is not unique.
Error: Error: Unable to run the custom hook script ['/usr/bin/python', 
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py',
 'ANY', '/var/lib/ambari-agent/data/command-864.json', 
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', 
'/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
'/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Error: Error: Unable to run 
the custom hook script ['/usr/bin/python', 
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py',
 'START', '/var/lib/ambari-agent/data/command-864.json', 
'/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START', 
'/var/lib/ambari-agent/data/structured-out-864.json', 'INFO', 
'/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']


Suspect this has something to do with the TLS v1 protocol on Suse 11.4.

Cluster:  (alive for 48h)
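For illustration, a minimal Python sketch of a defensive way to build the usermod call so the failure above cannot arise: deduplicate supplementary groups and skip -u when the UID is unchanged. This is a hypothetical hardening, not the committed fix.

{code}
import pwd

def build_usermod_cmd(user, uid=None, primary_group=None, groups=()):
    entry = pwd.getpwnam(user)
    cmd = ['usermod']
    if uid is not None and uid != entry.pw_uid:
        cmd += ['-u', str(uid)]          # only change the UID if it differs
    if primary_group:
        cmd += ['-g', primary_group]
    # Unique supplementary groups, excluding the primary group, so we never
    # emit '-G hadoop,hadoop ... -g hadoop' as in the failing call above.
    supplementary = sorted(set(groups) - {primary_group})
    if supplementary:
        cmd += ['-G', ','.join(supplementary)]
    cmd.append(user)
    return cmd if len(cmd) > 2 else None  # None means nothing to change

# e.g. build_usermod_cmd('hive', uid=1002, primary_group='hadoop',
#                        groups=['hadoop', 'hadoop'])
{code}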





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21169) Service and Patch Upgrade Catalog Changes

2017-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094480#comment-16094480
 ] 

Hadoop QA commented on AMBARI-21169:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12878085/AMBARI-21530.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/11825//console

This message is automatically generated.

> Service and Patch Upgrade Catalog Changes
> -
>
> Key: AMBARI-21169
> URL: https://issues.apache.org/jira/browse/AMBARI-21169
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-upgrade
>Affects Versions: 3.0.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: AMBARI-21530.patch
>
>
> Implement the following upgrade catalog changes related to service/patch 
> upgrades:
> h5. {{servicecomponentdesiredstate}}
> - Remove:  desired_stack_id BIGINT NOT NULL
> - Remove: desired_version VARCHAR(255) NOT NULL DEFAULT 'UNKNOWN'
> - Remove: FK on desired_stack_id (FK_scds_desired_stack_id)
> - Add: desired_repo_version_id BIGINT NOT NULL
> - Add: FK to repo_version_id (FK_scds_desired_repo_id)
> h5. {{hostcomponentdesiredstate}}
> - Remove: desired_stack_id BIGINT NOT NULL
> - Remove: FK on desired_stack_id (FK_hcds_desired_stack_id)
> h5. {{hostcomponentstate}}
> - Remove: current_stack_id BIGINT NOT NULL
> - Remove: FK on desired_stack_id (FK_hcs_current_stack_id)
> h5. {{servicedesiredstate}}
> - Remove: desired_stack_id BIGINT NOT NULL
> - Add: desired_repo_version_id BIGINT NOT NULL
> - Add: FK  to repo_version_id (FK_repo_version_id)
> h5. {{host_version}}
> - Change the {{UNIQUE}} constraint to allow for multiple {{CURRENT}} 
> repositories per host. Restriction should also include the 
> {{repo_version_id}} for uniqueness now.
> h5. {{cluster_version}}
> - This table was removed.
> h5. {{servicecomponent_version}}
> - Create this table and populate with data
> h5. {{upgrade}}
> - Add orchestration VARCHAR(255) NOT NULL DEFAULT 'STANDARD'
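For illustration, a sketch of a few of these changes as plain DDL issued from Python over a DB-API connection. The real migration lives in Ambari's Java upgrade catalog; the SQL dialect, constraint handling, and the data backfill a populated table would need before an ADD COLUMN ... NOT NULL are all assumptions here.

{code}
DDL = [
    "ALTER TABLE servicecomponentdesiredstate DROP CONSTRAINT FK_scds_desired_stack_id",
    "ALTER TABLE servicecomponentdesiredstate DROP COLUMN desired_stack_id",
    "ALTER TABLE servicecomponentdesiredstate DROP COLUMN desired_version",
    "ALTER TABLE servicecomponentdesiredstate ADD COLUMN desired_repo_version_id BIGINT NOT NULL",
    "ALTER TABLE servicecomponentdesiredstate ADD CONSTRAINT FK_scds_desired_repo_id"
    " FOREIGN KEY (desired_repo_version_id) REFERENCES repo_version (repo_version_id)",
    "ALTER TABLE upgrade ADD COLUMN orchestration VARCHAR(255) NOT NULL DEFAULT 'STANDARD'",
    "DROP TABLE cluster_version",
]

def apply_ddl(connection):
    cursor = connection.cursor()   # any DB-API 2.0 connection
    for statement in DDL:
        cursor.execute(statement)
    connection.commit()
{code}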



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-21516) Log Search docker test environment build front/backend only

2017-07-20 Thread Miklos Gergely (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094478#comment-16094478
 ] 

Miklos Gergely commented on AMBARI-21516:
-

committed to trunk:
{code:java}
commit 587c42d79da4b384ab18d7078c6d045a807a7bb5
Author: Miklos Gergely 
Date:   Thu Jul 20 12:16:31 2017 +0200

AMBARI-21516 Log Search docker test environment build front/backend only 
(mgergely)

Change-Id: I30d9d9a2c38ceeea653f7dda2c51493bd2df7ae0
{code}
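As a rough illustration of the idea (hypothetical, not the committed change), a build driver could take a target argument and rebuild only the requested part; the Maven module names below are assumptions.

{code}
import argparse
import subprocess

MODULES = {
    'backend': 'ambari-logsearch-server',   # assumed module name
    'frontend': 'ambari-logsearch-web',     # assumed module name
}

def main():
    parser = argparse.ArgumentParser(description='Build logsearch for docker tests')
    parser.add_argument('target', nargs='?', default='all',
                        choices=['all', 'backend', 'frontend'])
    args = parser.parse_args()
    cmd = ['mvn', 'clean', 'package', '-DskipTests']
    if args.target != 'all':
        cmd += ['-pl', MODULES[args.target]]  # restrict the build to one module
    subprocess.check_call(cmd)

if __name__ == '__main__':
    main()
{code}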

> Log Search docker test environment build front/backend only
> ---
>
> Key: AMBARI-21516
> URL: https://issues.apache.org/jira/browse/AMBARI-21516
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21516.patch
>
>
> Building the logsearch project for testing on Docker takes some time, while 
> often only the backend or the frontend has been modified. It should be 
> configurable which parts to build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-21516) Log Search docker test environment build front/backend only

2017-07-20 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely resolved AMBARI-21516.
-
Resolution: Fixed

> Log Search docker test environment build front/backend only
> ---
>
> Key: AMBARI-21516
> URL: https://issues.apache.org/jira/browse/AMBARI-21516
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21516.patch
>
>
> Building the logsearch project for testing on Docker takes some time, while 
> often only the backend or the frontend has been modified. It should be 
> configurable which parts to build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-9510) Metric Monitor can not be started

2017-07-20 Thread Dmytro Sen (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16094459#comment-16094459
 ] 

Dmytro Sen commented on AMBARI-9510:


[~WanderingEachDay]

There is no ld utility installed on your system; install the binutils package. Two 
versions of Python should be fine.

Please use the user mailing list for discussion, not JIRA.

Thanks.
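For illustration, a minimal Python sketch of a pre-flight check the monitor could run, assuming nothing beyond the public psutil API: verify that the platform-specific psutil extension actually loads before starting, and point at the documented remediation instead of dying with the ImportError shown below.

{code}
def psutil_is_usable():
    try:
        import psutil
        psutil.cpu_times()   # touches the platform-specific C extension
        return True
    except ImportError:
        return False

if not psutil_is_usable():
    raise SystemExit("psutil binaries need to be built: run psutil/build.py "
                     "manually or run a `mvn clean package` command (requires "
                     "ld from the binutils package).")
{code}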

> Metric Monitor can not be started
> -
>
> Key: AMBARI-9510
> URL: https://issues.apache.org/jira/browse/AMBARI-9510
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-metrics, ambari-server
>Affects Versions: 2.0.0
>Reporter: Dmytro Sen
>Assignee: Dmytro Sen
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: AMBARI-9510.patch
>
>
> After deploying, the Metric Monitor cannot be started.
> /var/log/ambari-metrics-monitor/ambari-metrics-monitor.out contains:
> psutil binaries need to be built by running psutil/build.py manually or by 
> running a {{mvn clean package}} command.
> {noformat}
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_monitoring/main.py", line 
> 27, in 
> from core.controller import Controller
>   File 
> "/usr/lib/python2.6/site-packages/resource_monitoring/core/controller.py", 
> line 28, in 
> from metric_collector import MetricsCollector
>   File 
> "/usr/lib/python2.6/site-packages/resource_monitoring/core/metric_collector.py",
>  line 23, in 
> from host_info import HostInfo
>   File 
> "/usr/lib/python2.6/site-packages/resource_monitoring/core/host_info.py", 
> line 22, in 
> import psutil
>   File 
> "/usr/lib/python2.6/site-packages/resource_monitoring/psutil/build/lib.linux-x86_64-2.6/psutil/__init__.py",
>  line 89, in 
> import psutil._pslinux as _psplatform
>   File 
> "/usr/lib/python2.6/site-packages/resource_monitoring/psutil/build/lib.linux-x86_64-2.6/psutil/_pslinux.py",
>  line 20, in 
> from psutil import _common
> ImportError: cannot import name _common
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21516) Log Search docker test environment build front/backend only

2017-07-20 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-21516:

Attachment: AMBARI-21516.patch

> Log Search docker test environment build front/backend only
> ---
>
> Key: AMBARI-21516
> URL: https://issues.apache.org/jira/browse/AMBARI-21516
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-logsearch
>Affects Versions: 3.0.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 3.0.0
>
> Attachments: AMBARI-21516.patch
>
>
> Building the logsearch project for testing on Docker takes some time, while 
> often only the backend or the frontend has been modified. It should be 
> configurable which parts to build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

