[jira] [Updated] (AMBARI-22060) Fail to restart Ranger Admin during HDP downgrade.

2017-09-26 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22060:

Attachment: AMBARI-22060.patch

> Fail to restart Ranger Admin  during HDP downgrade.
> ---
>
> Key: AMBARI-22060
> URL: https://issues.apache.org/jira/browse/AMBARI-22060
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-22060.patch
>
>
> During the downgrade process, we ran into the following error while restarting Ranger Admin:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 216, in
> RangerAdmin().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 850, in restart
> self.start(env, upgrade_type=upgrade_type)
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 93, in start
> setup_ranger_audit_solr()
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py",
>  line 705, in setup_ranger_audit_solr
> new_service_principals = [params.ranger_admin_jaas_principal])
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 329, in add_solr_roles
> new_service_users.append(__remove_host_from_principal(new_service_user, 
> kerberos_realm))
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 266, in __remove_host_from_principal
> if not realm:
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
>  line 73, in __getattr__
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'kerberos-env' was not found in configurations dictionary!
> {code}
> The reason was that the server did not have many configs selected and did not send them to the agent during the downgrade. There are a few issues here:
> - During the upgrade from 2.4 to 2.5, finalize did not update the current cluster version. As a result, the config helpers misbehaved.
> - As a result of the previous issue, some Configure tasks failed to execute.
> - During the downgrade from 2.6, it looks like the cluster entity DB state was not consistent after config selection, so configs were not selected in some cases. I managed to reproduce that only once; it is a race condition that is very hard to catch/trace in a debugger.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22060) Fail to restart Ranger Admin during HDP downgrade.

2017-09-26 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22060:

Status: Patch Available  (was: Open)

> Fail to restart Ranger Admin  during HDP downgrade.
> ---
>
> Key: AMBARI-22060
> URL: https://issues.apache.org/jira/browse/AMBARI-22060
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-22060.patch
>
>
> During the downgrade process, we ran into the following error while restarting Ranger Admin:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 216, in
> RangerAdmin().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 850, in restart
> self.start(env, upgrade_type=upgrade_type)
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 93, in start
> setup_ranger_audit_solr()
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py",
>  line 705, in setup_ranger_audit_solr
> new_service_principals = [params.ranger_admin_jaas_principal])
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 329, in add_solr_roles
> new_service_users.append(__remove_host_from_principal(new_service_user, 
> kerberos_realm))
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 266, in __remove_host_from_principal
> if not realm:
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
>  line 73, in __getattr__
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'kerberos-env' was not found in configurations dictionary!
> {code}
> The reason was that the server did not have many configs selected and did not send them to the agent during the downgrade. There are a few issues here:
> - During the upgrade from 2.4 to 2.5, finalize did not update the current cluster version. As a result, the config helpers misbehaved.
> - As a result of the previous issue, some Configure tasks failed to execute.
> - During the downgrade from 2.6, it looks like the cluster entity DB state was not consistent after config selection, so configs were not selected in some cases. I managed to reproduce that only once; it is a race condition that is very hard to catch/trace in a debugger.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22060) Fail to restart Ranger Admin during HDP downgrade.

2017-09-26 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22060:

Fix Version/s: 2.6.0

> Fail to restart Ranger Admin  during HDP downgrade.
> ---
>
> Key: AMBARI-22060
> URL: https://issues.apache.org/jira/browse/AMBARI-22060
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-22060.patch
>
>
> During the downgrade process, we ran into the following error while restarting Ranger Admin:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 216, in
> RangerAdmin().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 850, in restart
> self.start(env, upgrade_type=upgrade_type)
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 93, in start
> setup_ranger_audit_solr()
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py",
>  line 705, in setup_ranger_audit_solr
> new_service_principals = [params.ranger_admin_jaas_principal])
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 329, in add_solr_roles
> new_service_users.append(__remove_host_from_principal(new_service_user, 
> kerberos_realm))
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 266, in __remove_host_from_principal
> if not realm:
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
>  line 73, in __getattr__
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'kerberos-env' was not found in configurations dictionary!
> {code}
> The reason was that the server did not have many configs selected and did not send them to the agent during the downgrade. There are a few issues here:
> - During the upgrade from 2.4 to 2.5, finalize did not update the current cluster version. As a result, the config helpers misbehaved.
> - As a result of the previous issue, some Configure tasks failed to execute.
> - During the downgrade from 2.6, it looks like the cluster entity DB state was not consistent after config selection, so configs were not selected in some cases. I managed to reproduce that only once; it is a race condition that is very hard to catch/trace in a debugger.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22060) Fail to restart Ranger Admin during HDP downgrade.

2017-09-26 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22060:

Affects Version/s: 2.5.1

> Fail to restart Ranger Admin  during HDP downgrade.
> ---
>
> Key: AMBARI-22060
> URL: https://issues.apache.org/jira/browse/AMBARI-22060
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-22060.patch
>
>
> During the downgrade process, we ran into the following error while restarting Ranger Admin:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 216, in
> RangerAdmin().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 329, in execute
> method(env)
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 850, in restart
> self.start(env, upgrade_type=upgrade_type)
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
>  line 93, in start
> setup_ranger_audit_solr()
> File 
> "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py",
>  line 705, in setup_ranger_audit_solr
> new_service_principals = [params.ranger_admin_jaas_principal])
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 329, in add_solr_roles
> new_service_users.append(__remove_host_from_principal(new_service_user, 
> kerberos_realm))
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
>  line 266, in __remove_host_from_principal
> if not realm:
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
>  line 73, in __getattr__
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'kerberos-env' was not found in configurations dictionary!
> {code}
> The reason was that the server did not have many configs selected and did not send them to the agent during the downgrade. There are a few issues here:
> - During the upgrade from 2.4 to 2.5, finalize did not update the current cluster version. As a result, the config helpers misbehaved.
> - As a result of the previous issue, some Configure tasks failed to execute.
> - During the downgrade from 2.6, it looks like the cluster entity DB state was not consistent after config selection, so configs were not selected in some cases. I managed to reproduce that only once; it is a race condition that is very hard to catch/trace in a debugger.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22060) Fail to restart Ranger Admin during HDP downgrade.

2017-09-26 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22060:
---

 Summary: Fail to restart Ranger Admin  during HDP downgrade.
 Key: AMBARI-22060
 URL: https://issues.apache.org/jira/browse/AMBARI-22060
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Critical



During the downgrade process, we ran into the following error while restarting Ranger Admin:

{code}
Traceback (most recent call last):
File 
"/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
 line 216, in
RangerAdmin().execute()
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 329, in execute
method(env)
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 850, in restart
self.start(env, upgrade_type=upgrade_type)
File 
"/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py",
 line 93, in start
setup_ranger_audit_solr()
File 
"/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py",
 line 705, in setup_ranger_audit_solr
new_service_principals = [params.ranger_admin_jaas_principal])
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
 line 329, in add_solr_roles
new_service_users.append(__remove_host_from_principal(new_service_user, 
kerberos_realm))
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py",
 line 266, in __remove_host_from_principal
if not realm:
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
 line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in 
configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 
'kerberos-env' was not found in configurations dictionary!

{code}

The reason was that the server did not have many configs selected and did not send them to the agent during the downgrade. There are a few issues here:
- During the upgrade from 2.4 to 2.5, finalize did not update the current cluster version. As a result, the config helpers misbehaved.
- As a result of the previous issue, some Configure tasks failed to execute.
- During the downgrade from 2.6, it looks like the cluster entity DB state was not consistent after config selection, so configs were not selected in some cases. I managed to reproduce that only once; it is a race condition that is very hard to catch/trace in a debugger.
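For context, the failure mode above can be illustrated with a simplified stand-in for the agent's config dictionary. This is a minimal sketch only; the class and the guarded-lookup helper below are illustrative and are not the actual resource_management implementation or the attached fix.

{code}
# Simplified sketch (not the actual Ambari code): a config wrapper that fails
# hard on missing keys, plus a guarded lookup that tolerates config sections
# the server did not send, e.g. 'kerberos-env' during a downgrade.

class Fail(Exception):
    pass

class ConfigDict(object):
    def __init__(self, data):
        self._data = data

    def __getattr__(self, key):
        # Mirrors the behavior seen in the traceback: a missing key fails hard.
        try:
            return self._data[key]
        except KeyError:
            raise Fail("Configuration parameter '" + key +
                       "' was not found in configurations dictionary!")

def safe_get(config, key, default=None):
    """Guarded lookup: return a default instead of failing on a missing config."""
    try:
        return getattr(config, key)
    except Fail:
        return default

configs = ConfigDict({'ranger-env': {'ranger_user': 'ranger'}})
print(safe_get(configs, 'kerberos-env'))   # None instead of raising Fail
{code}

The sketch only shows why the missing section surfaces as a hard Fail in the agent script; the underlying problem described above is that the server did not select and send those configs in the first place.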






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22104) Refactor existing server side actions to use the common AbstractUpgradeServerAction

2017-10-01 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22104:

Attachment: AMBARI-22104.patch

> Refactor existing server side actions to use the common 
> AbstractUpgradeServerAction
> ---
>
> Key: AMBARI-22104
> URL: https://issues.apache.org/jira/browse/AMBARI-22104
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-22104.patch
>
>
> Other server-side action classes need to use the abstract class named in the summary (AbstractUpgradeServerAction). Identify the fields that are largely common across the server-side actions and move them into the abstract class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22104) Refactor existing server side actions to use the common AbstractUpgradeServerAction

2017-10-01 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22104:
---

 Summary: Refactor existing server side actions to use the common 
AbstractUpgradeServerAction
 Key: AMBARI-22104
 URL: https://issues.apache.org/jira/browse/AMBARI-22104
 Project: Ambari
  Issue Type: Task
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko



Other server-side action classes need to use the abstract class named in the summary (AbstractUpgradeServerAction). Identify the fields that are largely common across the server-side actions and move them into the abstract class.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22104) Refactor existing server side actions to use the common AbstractUpgradeServerAction

2017-10-01 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22104:

Component/s: ambari-server

> Refactor existing server side actions to use the common 
> AbstractUpgradeServerAction
> ---
>
> Key: AMBARI-22104
> URL: https://issues.apache.org/jira/browse/AMBARI-22104
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>
> Other server-side action classes need to use the abstract class named in the summary (AbstractUpgradeServerAction). Identify the fields that are largely common across the server-side actions and move them into the abstract class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22104) Refactor existing server side actions to use the common AbstractUpgradeServerAction

2017-10-01 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22104:

Status: Patch Available  (was: Open)

> Refactor existing server side actions to use the common 
> AbstractUpgradeServerAction
> ---
>
> Key: AMBARI-22104
> URL: https://issues.apache.org/jira/browse/AMBARI-22104
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-22104.patch
>
>
> Other server-side action classes need to use the abstract class named in the summary (AbstractUpgradeServerAction). Identify the fields that are largely common across the server-side actions and move them into the abstract class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22104) Refactor existing server side actions to use the common AbstractUpgradeServerAction

2017-10-03 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22104:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   1f00c19d09..1032bc5d38  trunk -> trunk


> Refactor existing server side actions to use the common 
> AbstractUpgradeServerAction
> ---
>
> Key: AMBARI-22104
> URL: https://issues.apache.org/jira/browse/AMBARI-22104
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-22104.patch
>
>
> Other server-side action classes need to use the abstract class named in the summary (AbstractUpgradeServerAction). Identify the fields that are largely common across the server-side actions and move them into the abstract class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-21832) Reject PATCH VDFs with Services that are not Included in the Cluster

2017-08-28 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-21832:
---

 Summary: Reject PATCH VDFs with Services that are not Included in 
the Cluster
 Key: AMBARI-21832
 URL: https://issues.apache.org/jira/browse/AMBARI-21832
 Project: Ambari
  Issue Type: Task
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Critical



Currently there is an odd scenario that can occur when a patch repository is registered that includes services not yet installed. Consider the following scenario:

- Install ZooKeeper, Storm on HDP 2.6.0.0-1234
- Register/patch a {{PATCH}} VDF for Storm and Accumulo for 2.6.0.1-
- Install Accumulo

Which version does Accumulo use - the {{STANDARD}} repository or the {{PATCH}}? 
If the {{PATCH}} repository is chosen, this will now prevent reversion of the 
patch since there's no prior version for Accumulo to revert back to.

If Accumulo uses the {{STANDARD}} repo, then there needs to be a lot of design 
and UX flow work provided to indicate that a {{PATCH}} which was previously 
applied can be re-applied for the new service. This also causes problems for 
patch reversion since now there would be two upgrades which need to be reverted 
to "get rid" of the patch.

For the Ambari 2.6 timeframe, we should reject VDFs that include services which are not installed. This will prevent the problem.
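As a rough illustration of the proposed check, the sketch below rejects a {{PATCH}} VDF that lists services the cluster does not have. The function and argument names are hypothetical; the real validation belongs in ambari-server, not in this snippet.

{code}
# Hypothetical sketch of the proposed validation; names are illustrative.

def validate_patch_vdf(vdf_type, vdf_services, installed_services):
    """Reject a PATCH VDF that references services not installed in the cluster."""
    if vdf_type != 'PATCH':
        return
    missing = sorted(set(vdf_services) - set(installed_services))
    if missing:
        raise ValueError(
            "PATCH VDF includes services not installed in the cluster: "
            + ", ".join(missing))

# Scenario from the description: Storm is installed, Accumulo is not.
try:
    validate_patch_vdf('PATCH', ['STORM', 'ACCUMULO'], ['ZOOKEEPER', 'STORM'])
except ValueError as e:
    print(e)  # PATCH VDF includes services not installed in the cluster: ACCUMULO
{code}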





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21832) Reject PATCH VDFs with Services that are not Included in the Cluster

2017-08-28 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21832:

Component/s: ambari-server

> Reject PATCH VDFs with Services that are not Included in the Cluster
> 
>
> Key: AMBARI-21832
> URL: https://issues.apache.org/jira/browse/AMBARI-21832
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-21832.patch
>
>
> Currently there is an odd scenario that can occur when a patch repository is registered that includes services not yet installed. Consider the following scenario:
> - Install ZooKeeper, Storm on HDP 2.6.0.0-1234
> - Register/patch a {{PATCH}} VDF for Storm and Accumulo for 2.6.0.1-
> - Install Accumulo
> Which version does Accumulo use - the {{STANDARD}} repository or the 
> {{PATCH}}? If the {{PATCH}} repository is chosen, this will now prevent 
> reversion of the patch since there's no prior version for Accumulo to revert 
> back to.
> If Accumulo uses the {{STANDARD}} repo, then there needs to be a lot of 
> design and UX flow work provided to indicate that a {{PATCH}} which was 
> previously applied can be re-applied for the new service. This also causes 
> problems for patch reversion since now there would be two upgrades which need 
> to be reverted to "get rid" of the patch.
> For the Ambari 2.6 timeframe, we should reject VDFs that include services which are not installed. This will prevent the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21832) Reject PATCH VDFs with Services that are not Included in the Cluster

2017-08-28 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21832:

Status: Patch Available  (was: Open)

> Reject PATCH VDFs with Services that are not Included in the Cluster
> 
>
> Key: AMBARI-21832
> URL: https://issues.apache.org/jira/browse/AMBARI-21832
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Attachments: AMBARI-21832.patch
>
>
> Currently there is an odd scenario that can occur when a patch repository is registered that includes services not yet installed. Consider the following scenario:
> - Install ZooKeeper, Storm on HDP 2.6.0.0-1234
> - Register/patch a {{PATCH}} VDF for Storm and Accumulo for 2.6.0.1-
> - Install Accumulo
> Which version does Accumulo use - the {{STANDARD}} repository or the 
> {{PATCH}}? If the {{PATCH}} repository is chosen, this will now prevent 
> reversion of the patch since there's no prior version for Accumulo to revert 
> back to.
> If Accumulo uses the {{STANDARD}} repo, then there needs to be a lot of 
> design and UX flow work provided to indicate that a {{PATCH}} which was 
> previously applied can be re-applied for the new service. This also causes 
> problems for patch reversion since now there would be two upgrades which need 
> to be reverted to "get rid" of the patch.
> For the Ambari 2.6 timeframe, we should reject VDFs that include services which are not installed. This will prevent the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21832) Reject PATCH VDFs with Services that are not Included in the Cluster

2017-08-28 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21832:

Fix Version/s: 2.6.0

> Reject PATCH VDFs with Services that are not Included in the Cluster
> 
>
> Key: AMBARI-21832
> URL: https://issues.apache.org/jira/browse/AMBARI-21832
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-21832.patch
>
>
> Currently there is an odd scenario that can occur when a patch repository is registered that includes services not yet installed. Consider the following scenario:
> - Install ZooKeeper, Storm on HDP 2.6.0.0-1234
> - Register/patch a {{PATCH}} VDF for Storm and Accumulo for 2.6.0.1-
> - Install Accumulo
> Which version does Accumulo use - the {{STANDARD}} repository or the 
> {{PATCH}}? If the {{PATCH}} repository is chosen, this will now prevent 
> reversion of the patch since there's no prior version for Accumulo to revert 
> back to.
> If Accumulo uses the {{STANDARD}} repo, then there needs to be a lot of 
> design and UX flow work provided to indicate that a {{PATCH}} which was 
> previously applied can be re-applied for the new service. This also causes 
> problems for patch reversion since now there would be two upgrades which need 
> to be reverted to "get rid" of the patch.
> For the Ambari 2.6 timeframe, we should reject VDFs that include services which are not installed. This will prevent the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-21832) Reject PATCH VDFs with Services that are not Included in the Cluster

2017-09-04 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-21832:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   6174afff1e..9e4324fcfc  branch-2.6 -> branch-2.6
   c51540dee8..c091ebe8af  trunk -> trunk

> Reject PATCH VDFs with Services that are not Included in the Cluster
> 
>
> Key: AMBARI-21832
> URL: https://issues.apache.org/jira/browse/AMBARI-21832
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-21832.patch
>
>
> Currently there is an odd scenario that can occur when a patch repository is registered that includes services not yet installed. Consider the following scenario:
> - Install ZooKeeper, Storm on HDP 2.6.0.0-1234
> - Register/patch a {{PATCH}} VDF for Storm and Accumulo for 2.6.0.1-
> - Install Accumulo
> Which version does Accumulo use - the {{STANDARD}} repository or the 
> {{PATCH}}? If the {{PATCH}} repository is chosen, this will now prevent 
> reversion of the patch since there's no prior version for Accumulo to revert 
> back to.
> If Accumulo uses the {{STANDARD}} repo, then there needs to be a lot of 
> design and UX flow work provided to indicate that a {{PATCH}} which was 
> previously applied can be re-applied for the new service. This also causes 
> problems for patch reversion since now there would be two upgrades which need 
> to be reverted to "get rid" of the patch.
> For the Ambari 2.6 timeframe, we should reject VDFs that include services which are not installed. This will prevent the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-21744) package_regex in get_package_from_available() can match wrong pkg

2017-08-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-21744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko resolved AMBARI-21744.
-
Resolution: Fixed

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   39bc724e87..890a5f584c  branch-2.6 -> branch-2.6
   4238781810..5e399daeb3  trunk -> trunk

> package_regex in get_package_from_available() can match wrong pkg
> -
>
> Key: AMBARI-21744
> URL: https://issues.apache.org/jira/browse/AMBARI-21744
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Critical
> Fix For: 2.6.0
>
> Attachments: AMBARI-21744.patch
>
>
> Due to an issue with the regex (missing ^ and $ boundaries), 
> resource_management.libraries.script.script.Script#get_package_from_available 
> may return the wrong package.
> {code}
> >>> import re
> >>> pkgs = ['hbase_3_0_0_0_229-master', 'hbase_3_0_0_0_229']
> >>> if re.match('hbase_(\d|_)+', 'hbase_3_0_0_0_229-master'):
> ...     print 'YES'
> ...
> YES
> >>> if re.match('hbase_(\d|_)+', 'hbase_3_0_0_0_229'):
> ...     print 'YES'
> ...
> YES
> {code}
> In this case, the first package name from the list of available packages will be returned.
> The impact of the bug is that we may install the wrong package if a similarly named package comes first in the list. The patch is a single-line fix.
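The effect of anchoring the pattern can be shown with a short snippet (illustrative only, not the patch itself):

{code}
import re

packages = ['hbase_3_0_0_0_229-master', 'hbase_3_0_0_0_229']

# Unanchored pattern: both names match, so the first (wrong) package can win.
loose = r'hbase_(\d|_)+'
print([p for p in packages if re.match(loose, p)])
# ['hbase_3_0_0_0_229-master', 'hbase_3_0_0_0_229']

# Anchored pattern: only the exact package name matches.
strict = r'^hbase_(\d|_)+$'
print([p for p in packages if re.match(strict, p)])
# ['hbase_3_0_0_0_229']
{code}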



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22158) Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and using oracle as database

2017-10-06 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22158:
---

 Summary: Ambari schema upgrade fails when upgrading ambari from 
2.5.1.0 to 2.6.0.0 and using oracle as database
 Key: AMBARI-22158
 URL: https://issues.apache.org/jira/browse/AMBARI-22158
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker



While upgrading Ambari from 2.5.1.0 to 2.6.0.0 using an Oracle database, the schema upgrade fails with the exception below:
{code:None}
05 Oct 2017 08:23:00,312  INFO [main] DBAccessorImpl:874 - Executing query: 
ALTER TABLE VIEWURL DROP PRIMARY KEY
05 Oct 2017 08:23:00,342 ERROR [main] DBAccessorImpl:880 - Error executing 
query: ALTER TABLE VIEWURL DROP PRIMARY KEY
java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
referenced by some foreign keys

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
at 
oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
at 
org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
at 
org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
at 
org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
at 
org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
at 
org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:202 - Upgrade failed.
java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
referenced by some foreign keys

at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
at 
oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
at 
oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
at 
org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
at 
org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
at 
org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
at 
org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
at 
org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:437 - Exception 
occurred during upgrade, failed
org.apache.ambari.server.AmbariException: ORA-02273: this unique/primary key is 
referenced by some foreign keys

at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: 

[jira] [Updated] (AMBARI-22158) Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and using oracle as database

2017-10-06 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22158:

Component/s: ambari-server

> Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and 
> using oracle as database
> --
>
> Key: AMBARI-22158
> URL: https://issues.apache.org/jira/browse/AMBARI-22158
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>
> While upgrading Ambari from 2.5.1.0 to 2.6.0.0 using an Oracle database, the schema upgrade fails with the exception below:
> {code:None}
> 05 Oct 2017 08:23:00,312  INFO [main] DBAccessorImpl:874 - Executing query: 
> ALTER TABLE VIEWURL DROP PRIMARY KEY
> 05 Oct 2017 08:23:00,342 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: ALTER TABLE VIEWURL DROP PRIMARY KEY
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> 05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:202 - Upgrade 
> failed.
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)

[jira] [Updated] (AMBARI-22158) Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and using oracle as database

2017-10-06 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22158:

Attachment: AMBARI-22158.patch

> Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and 
> using oracle as database
> --
>
> Key: AMBARI-22158
> URL: https://issues.apache.org/jira/browse/AMBARI-22158
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22158.patch
>
>
> While upgrading Ambari from 2.5.1.0 to 2.6.0.0 using an Oracle database, the schema upgrade fails with the exception below:
> {code:None}
> 05 Oct 2017 08:23:00,312  INFO [main] DBAccessorImpl:874 - Executing query: 
> ALTER TABLE VIEWURL DROP PRIMARY KEY
> 05 Oct 2017 08:23:00,342 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: ALTER TABLE VIEWURL DROP PRIMARY KEY
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> 05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:202 - Upgrade 
> failed.
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> 

[jira] [Updated] (AMBARI-22158) Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and using oracle as database

2017-10-06 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22158:

Status: Patch Available  (was: Open)

> Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and 
> using oracle as database
> --
>
> Key: AMBARI-22158
> URL: https://issues.apache.org/jira/browse/AMBARI-22158
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22158.patch
>
>
> While upgrading Ambari from 2.5.1.0 to 2.6.0.0 using an Oracle database, the schema upgrade fails with the exception below:
> {code:None}
> 05 Oct 2017 08:23:00,312  INFO [main] DBAccessorImpl:874 - Executing query: 
> ALTER TABLE VIEWURL DROP PRIMARY KEY
> 05 Oct 2017 08:23:00,342 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: ALTER TABLE VIEWURL DROP PRIMARY KEY
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> 05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:202 - Upgrade 
> failed.
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> 

[jira] [Updated] (AMBARI-22158) Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and using oracle as database

2017-10-06 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22158:

Affects Version/s: 2.6.0

> Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and 
> using oracle as database
> --
>
> Key: AMBARI-22158
> URL: https://issues.apache.org/jira/browse/AMBARI-22158
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: AMBARI-22158.patch
>
>
> While upgrading Ambari from 2.5.1.0 to 2.6.0.0 using an Oracle database, the schema upgrade fails with the exception below:
> {code:None}
> 05 Oct 2017 08:23:00,312  INFO [main] DBAccessorImpl:874 - Executing query: 
> ALTER TABLE VIEWURL DROP PRIMARY KEY
> 05 Oct 2017 08:23:00,342 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: ALTER TABLE VIEWURL DROP PRIMARY KEY
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> 05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:202 - Upgrade 
> failed.
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> 

[jira] [Updated] (AMBARI-22158) Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and using oracle as database

2017-10-06 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22158:

Fix Version/s: 2.6.0

> Ambari schema upgrade fails when upgrading ambari from 2.5.1.0 to 2.6.0.0 and 
> using oracle as database
> --
>
> Key: AMBARI-22158
> URL: https://issues.apache.org/jira/browse/AMBARI-22158
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: AMBARI-22158.patch
>
>
> While upgrading Ambari from 2.5.1.0 to 2.6.0.0 using an Oracle database, the schema upgrade fails with the exception below:
> {code:None}
> 05 Oct 2017 08:23:00,312  INFO [main] DBAccessorImpl:874 - Executing query: 
> ALTER TABLE VIEWURL DROP PRIMARY KEY
> 05 Oct 2017 08:23:00,342 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: ALTER TABLE VIEWURL DROP PRIMARY KEY
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> 05 Oct 2017 08:23:00,347 ERROR [main] SchemaUpgradeHelper:202 - Upgrade 
> failed.
> java.sql.SQLSyntaxErrorException: ORA-02273: this unique/primary key is 
> referenced by some foreign keys
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
> at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
> at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
> at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
> at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
> at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
> at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
> at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999)
> at 
> oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
> at 
> oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890)
> at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855)
> at 
> oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1045)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1053)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.dropPKConstraint(DBAccessorImpl.java:1344)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.addViewUrlPKConstraint(UpgradeCatalog260.java:206)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:196)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> 
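
The ORA-02273 above is Oracle refusing to drop the VIEWURL primary key while foreign keys still reference it. As a rough illustration of how that dependency can be untangled before retrying the DDL (this is only a sketch, not the AMBARI-22158 patch; the connection details are placeholders), the referencing constraints can be looked up in Oracle's user_constraints dictionary view and dropped first via plain JDBC:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class DropViewUrlPkExample {
  public static void main(String[] args) throws Exception {
    // Placeholder JDBC URL and credentials; use the real values from ambari.properties.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:oracle:thin:@//dbhost:1521/AMBARI", "ambari", "secret");
         Statement stmt = conn.createStatement()) {

      // Collect the foreign keys that reference the VIEWURL primary key.
      List<String> drops = new ArrayList<>();
      String findFks =
          "SELECT table_name, constraint_name FROM user_constraints "
        + "WHERE constraint_type = 'R' AND r_constraint_name IN "
        + "  (SELECT constraint_name FROM user_constraints "
        + "   WHERE table_name = 'VIEWURL' AND constraint_type = 'P')";
      try (ResultSet rs = stmt.executeQuery(findFks)) {
        while (rs.next()) {
          drops.add("ALTER TABLE " + rs.getString("table_name")
              + " DROP CONSTRAINT " + rs.getString("constraint_name"));
        }
      }

      // Drop the referencing foreign keys first, then the primary key itself.
      for (String ddl : drops) {
        stmt.execute(ddl);
      }
      stmt.execute("ALTER TABLE VIEWURL DROP PRIMARY KEY"); // no longer fails with ORA-02273
    }
  }
}
{code}
Oracle also accepts ALTER TABLE VIEWURL DROP PRIMARY KEY CASCADE, which drops the referencing foreign keys implicitly; dropping them explicitly, as above, leaves the option of recreating them afterwards.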

[jira] [Updated] (AMBARI-22213) "ambari-server upgrade" failed on db schema [Upgrade]

2017-10-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22213:

Attachment: AMBARI-22213.patch

> "ambari-server upgrade" failed on db schema [Upgrade]
> -
>
> Key: AMBARI-22213
> URL: https://issues.apache.org/jira/browse/AMBARI-22213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22213.patch
>
>
> Manual check:
> {code}
> tr-e134-1499953498516-213280-01-12:~ # ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> INFO: Upgrade Ambari Server
> INFO: Updating Ambari Server properties in ambari.properties ...
> WARNING: Can not find ambari.properties.rpmsave file from previous version, 
> skipping import of settings
> INFO: Updating Ambari Server properties in ambari-env.sh ...
> INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping 
> restore of environment settings. ambari-env.sh may not include any user 
> customization.
> INFO: Fixing database objects owner
> Ambari Server configured for Postgres. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> INFO: Upgrading database schema
> INFO: Return code from schema upgrade command, retcode = 1
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> ERROR: relation "servicecomponent_history" does not exist
> Position: 13
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.postgresql.util.PSQLException: ERROR: relation 
> "servicecomponent_history" does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {code}
> {code}
> 11 Oct 2017 13:49:16,918 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: DELETE FROM servicecomponent_history
> org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
> does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> 
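
The DELETE FROM servicecomponent_history above fails because UpgradeCatalog252 clears a table that does not exist in this database. A defensive pattern, shown here purely as an illustration and not as the AMBARI-22213 patch, is to consult JDBC metadata and skip the DELETE when the relation is absent:
{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.Statement;

public final class SafeClearTable {

  /** Deletes all rows from the table only if the table actually exists. */
  public static void clearTableIfExists(Connection conn, String table) throws Exception {
    DatabaseMetaData meta = conn.getMetaData();
    boolean exists;
    // PostgreSQL stores unquoted identifiers in lower case, so probe both cases.
    try (ResultSet lower = meta.getTables(null, null, table.toLowerCase(), new String[]{"TABLE"});
         ResultSet upper = meta.getTables(null, null, table.toUpperCase(), new String[]{"TABLE"})) {
      exists = lower.next() || upper.next();
    }
    if (!exists) {
      System.out.println("Skipping DELETE, table does not exist: " + table);
      return;
    }
    try (Statement stmt = conn.createStatement()) {
      stmt.executeUpdate("DELETE FROM " + table);
    }
  }
}
{code}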

[jira] [Updated] (AMBARI-22213) "ambari-server upgrade" failed on db schema [Upgrade]

2017-10-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22213:

Fix Version/s: 2.6.0

> "ambari-server upgrade" failed on db schema [Upgrade]
> -
>
> Key: AMBARI-22213
> URL: https://issues.apache.org/jira/browse/AMBARI-22213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: AMBARI-22213.patch
>
>
> Manual check:
> {code}
> tr-e134-1499953498516-213280-01-12:~ # ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> INFO: Upgrade Ambari Server
> INFO: Updating Ambari Server properties in ambari.properties ...
> WARNING: Can not find ambari.properties.rpmsave file from previous version, 
> skipping import of settings
> INFO: Updating Ambari Server properties in ambari-env.sh ...
> INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping 
> restore of environment settings. ambari-env.sh may not include any user 
> customization.
> INFO: Fixing database objects owner
> Ambari Server configured for Postgres. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> INFO: Upgrading database schema
> INFO: Return code from schema upgrade command, retcode = 1
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> ERROR: relation "servicecomponent_history" does not exist
> Position: 13
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.postgresql.util.PSQLException: ERROR: relation 
> "servicecomponent_history" does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {code}
> {code}
> 11 Oct 2017 13:49:16,918 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: DELETE FROM servicecomponent_history
> org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
> does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> 

[jira] [Updated] (AMBARI-22213) "ambari-server upgrade" failed on db schema [Upgrade]

2017-10-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22213:

Affects Version/s: 2.6.0

> "ambari-server upgrade" failed on db schema [Upgrade]
> -
>
> Key: AMBARI-22213
> URL: https://issues.apache.org/jira/browse/AMBARI-22213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: AMBARI-22213.patch
>
>
> Manual check:
> {code}
> tr-e134-1499953498516-213280-01-12:~ # ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> INFO: Upgrade Ambari Server
> INFO: Updating Ambari Server properties in ambari.properties ...
> WARNING: Can not find ambari.properties.rpmsave file from previous version, 
> skipping import of settings
> INFO: Updating Ambari Server properties in ambari-env.sh ...
> INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping 
> restore of environment settings. ambari-env.sh may not include any user 
> customization.
> INFO: Fixing database objects owner
> Ambari Server configured for Postgres. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> INFO: Upgrading database schema
> INFO: Return code from schema upgrade command, retcode = 1
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> ERROR: relation "servicecomponent_history" does not exist
> Position: 13
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.postgresql.util.PSQLException: ERROR: relation 
> "servicecomponent_history" does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {code}
> {code}
> 11 Oct 2017 13:49:16,918 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: DELETE FROM servicecomponent_history
> org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
> does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> 

[jira] [Created] (AMBARI-22213) "ambari-server upgrade" failed on db schema [Upgrade]

2017-10-11 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22213:
---

 Summary: "ambari-server upgrade" failed on db schema [Upgrade]
 Key: AMBARI-22213
 URL: https://issues.apache.org/jira/browse/AMBARI-22213
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker




Manual check:
{code}
tr-e134-1499953498516-213280-01-12:~ # ambari-server upgrade
Using python  /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
WARNING: Can not find ambari.properties.rpmsave file from previous version, 
skipping import of settings
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping 
restore of environment settings. ambari-env.sh may not include any user 
customization.
INFO: Fixing database objects owner
Ambari Server configured for Postgres. Confirm you have made a backup of the 
Ambari Server database [y/n] (y)?
INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
ERROR: relation "servicecomponent_history" does not exist
Position: 13
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: org.postgresql.util.PSQLException: ERROR: relation 
"servicecomponent_history" does not exist
Position: 13
at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at 
org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
at 
org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
at 
org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
at 
org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
... 1 more


ERROR: Ambari server upgrade failed. Please look at 
/var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11.
REASON: Schema upgrade failed.
{code}
{code}
11 Oct 2017 13:49:16,918 ERROR [main] DBAccessorImpl:880 - Error executing 
query: DELETE FROM servicecomponent_history
org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
does not exist
Position: 13
at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at 
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
at 
org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
at 
org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
at 
org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
at 
org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
at 
org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
at 
org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
11 Oct 2017 13:49:16,920 ERROR [main] SchemaUpgradeHelper:202 - Upgrade failed.
org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
does not exist
Position: 13
at 

[jira] [Updated] (AMBARI-22213) "ambari-server upgrade" failed on db schema [Upgrade]

2017-10-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22213:

Component/s: ambari-server

> "ambari-server upgrade" failed on db schema [Upgrade]
> -
>
> Key: AMBARI-22213
> URL: https://issues.apache.org/jira/browse/AMBARI-22213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>
> Manual check:
> {code}
> tr-e134-1499953498516-213280-01-12:~ # ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> INFO: Upgrade Ambari Server
> INFO: Updating Ambari Server properties in ambari.properties ...
> WARNING: Can not find ambari.properties.rpmsave file from previous version, 
> skipping import of settings
> INFO: Updating Ambari Server properties in ambari-env.sh ...
> INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping 
> restore of environment settings. ambari-env.sh may not include any user 
> customization.
> INFO: Fixing database objects owner
> Ambari Server configured for Postgres. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> INFO: Upgrading database schema
> INFO: Return code from schema upgrade command, retcode = 1
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> ERROR: relation "servicecomponent_history" does not exist
> Position: 13
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.postgresql.util.PSQLException: ERROR: relation 
> "servicecomponent_history" does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {code}
> {code}
> 11 Oct 2017 13:49:16,918 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: DELETE FROM servicecomponent_history
> org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
> does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> 

[jira] [Updated] (AMBARI-22213) "ambari-server upgrade" failed on db schema [Upgrade]

2017-10-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22213:

Status: Patch Available  (was: Open)

> "ambari-server upgrade" failed on db schema [Upgrade]
> -
>
> Key: AMBARI-22213
> URL: https://issues.apache.org/jira/browse/AMBARI-22213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22213.patch
>
>
> Manual check:
> {code}
> tr-e134-1499953498516-213280-01-12:~ # ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> INFO: Upgrade Ambari Server
> INFO: Updating Ambari Server properties in ambari.properties ...
> WARNING: Can not find ambari.properties.rpmsave file from previous version, 
> skipping import of settings
> INFO: Updating Ambari Server properties in ambari-env.sh ...
> INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping 
> restore of environment settings. ambari-env.sh may not include any user 
> customization.
> INFO: Fixing database objects owner
> Ambari Server configured for Postgres. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> INFO: Upgrading database schema
> INFO: Return code from schema upgrade command, retcode = 1
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> ERROR: relation "servicecomponent_history" does not exist
> Position: 13
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.postgresql.util.PSQLException: ERROR: relation 
> "servicecomponent_history" does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
> ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {code}
> {code}
> 11 Oct 2017 13:49:16,918 ERROR [main] DBAccessorImpl:880 - Error executing 
> query: DELETE FROM servicecomponent_history
> org.postgresql.util.PSQLException: ERROR: relation "servicecomponent_history" 
> does not exist
> Position: 13
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:395)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:877)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.executeQuery(DBAccessorImpl.java:869)
> at 
> org.apache.ambari.server.orm.DBAccessorImpl.clearTable(DBAccessorImpl.java:1500)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:169)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
> at 
> 

[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-13 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Affects Version/s: (was: 2.6.0)
   2.6.1

> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support for distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.
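
The check described above amounts to inspecting {{core-site}} for the GPL LZO codec classes. A minimal sketch of that decision logic follows; it is not the AMBARI-22387 implementation (Ambari's pre-upgrade check API is omitted), and only the standard Hadoop property and codec class names are assumed:
{code}
import java.util.Arrays;
import java.util.Map;

public final class LzoUsageCheck {

  /** core-site property and codec classes that indicate LZO is in use. */
  private static final String CODECS_PROPERTY = "io.compression.codecs";
  private static final String LZO_CODEC  = "com.hadoop.compression.lzo.LzoCodec";
  private static final String LZOP_CODEC = "com.hadoop.compression.lzo.LzopCodec";

  /** Returns true when core-site enables an LZO codec, i.e. the upgrade warning applies. */
  public static boolean isLzoEnabled(Map<String, String> coreSite) {
    String codecs = coreSite.getOrDefault(CODECS_PROPERTY, "");
    return Arrays.stream(codecs.split(","))
        .map(String::trim)
        .anyMatch(c -> c.equals(LZO_CODEC) || c.equals(LZOP_CODEC));
  }

  public static void main(String[] args) {
    Map<String, String> coreSite = Map.of(CODECS_PROPERTY,
        "org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec");
    if (isLzoEnabled(coreSite)) {
      System.out.println("WARNING: LZO codecs are enabled in core-site; "
          + "distribute and install the LZO packages manually before upgrading.");
    }
  }
}
{code}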



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-13 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

To https://git-wip-us.apache.org/repos/asf/ambari.git
   a9af58a50f..3153e2d1b3  branch-2.6 -> branch-2.6
   5122671d00..76349ac20d  trunk -> trunk


> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support for distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-13 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Fix Version/s: 2.6.1

> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support for distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-13 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Affects Version/s: 2.6.0

> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support for distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Attachment: AMBARI-22500.patch

> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.
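
A minimal sketch of the adjusted condition, assuming the AMBARI-22457 opt-in is surfaced to the check as a boolean flag (the real property name and plumbing are not shown here):
{code}
/**
 * Sketch only: the AMBARI-22387 warning should fire when LZO is enabled
 * in core-site AND the user has not opted in to GPL-licensed software.
 */
public final class LzoWithoutGplOptInCheck {

  public static boolean shouldWarn(boolean lzoEnabledInCoreSite, boolean gplLicenseAccepted) {
    // Opted-in clusters get LZO handled by Ambari, so no warning is needed there.
    return lzoEnabledInCoreSite && !gplLicenseAccepted;
  }

  public static void main(String[] args) {
    System.out.println(shouldWarn(true, false));  // true  -> warn before upgrade
    System.out.println(shouldWarn(true, true));   // false -> Ambari manages LZO
    System.out.println(shouldWarn(false, false)); // false -> LZO not in use
  }
}
{code}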



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22469) Ambari upgrade failed

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22469:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   f769309..92e362b  branch-2.6 -> branch-2.6
   5dd334c..b1acd1d  trunk -> trunk


> Ambari upgrade failed
> -
>
> Key: AMBARI-22469
> URL: https://issues.apache.org/jira/browse/AMBARI-22469
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: amarnath reddy pappu
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22469.patch
>
>
> Ambari upgrade would fail for all Ambari view servers.
> Steps to reproduce:
> 1. Install Ambari 2.5.2 and set it up as a view server (it also fails if you 
> do not set it up as a view server).
> 2. Now install 2.6.0.
> 3. Run ambari-server upgrade.
> It fails with the exception below.
> {noformat}
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> Unable to find any CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.apache.ambari.server.AmbariException: Unable to find any 
> CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
>   at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
>   ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {noformat}
> For some reason we check the cluster_version table entries and throw the 
> above exception.
> {noformat}
> In UpgradeCatalog260.java
> public int getCurrentVersionID() throws AmbariException, SQLException {
> List<Integer> currentVersionList = 
> dbAccessor.getIntColumnValues(CLUSTER_VERSION_TABLE, REPO_VERSION_ID_COLUMN,
> new String[]{STATE_COLUMN}, new String[]{CURRENT}, false);
> if (currentVersionList.isEmpty()) {
>   throw new AmbariException("Unable to find any CURRENT repositories.");
> } else if (currentVersionList.size() != 1) {
>   throw new AmbariException("The following repositories were found to be 
> CURRENT: ".concat(StringUtils.join(currentVersionList, ",")));
> }
> return currentVersionList.get(0);
>   }
> {noformat}
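
Before re-running the upgrade, it can help to verify what getCurrentVersionID() will see in the cluster_version table. The following diagnostic sketch assumes the column names repo_version_id and state implied by the constants above; it is an illustration, not part of the AMBARI-22469 patch:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CurrentRepoCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder JDBC URL and credentials; use the values from ambari.properties.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:postgresql://dbhost:5432/ambari", "ambari", "secret");
         PreparedStatement ps = conn.prepareStatement(
             "SELECT repo_version_id, state FROM cluster_version WHERE state = ?")) {
      ps.setString(1, "CURRENT");
      try (ResultSet rs = ps.executeQuery()) {
        int rows = 0;
        while (rs.next()) {
          rows++;
          System.out.println("CURRENT repo_version_id = " + rs.getLong("repo_version_id"));
        }
        // getCurrentVersionID() only succeeds when exactly one CURRENT row exists.
        System.out.println("CURRENT rows found: " + rows);
      }
    }
  }
}
{code}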



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Affects Version/s: 2.6.1

> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22500.2.branch-feature-AMBARI-22457.patch, 
> AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Fix Version/s: 2.6.1

> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22500.2.branch-feature-AMBARI-22457.patch, 
> AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Attachment: AMBARI-22500.2.branch-feature-AMBARI-22457.patch

> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22500.2.branch-feature-AMBARI-22457.patch, 
> AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22500:
---

 Summary: Modify AMBARI-22387 to Check for LZO + No Opt-in
 Key: AMBARI-22500
 URL: https://issues.apache.org/jira/browse/AMBARI-22500
 Project: Ambari
  Issue Type: Task
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker



The work for AMBARI-22387 was initially created to warn the user before 
performing an upgrade that LZO was enabled and would need to be managed on 
their own. However, with AMBARI-22457, we now support LZO when a user opts in 
to installing GPL-licensed code.

The warning in AMBARI-22387 should be changed to check for LZO being enabled 
but the user _NOT_ opting in.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Status: Patch Available  (was: Open)

> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Component/s: ambari-server

> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22500) Modify AMBARI-22387 to Check for LZO + No Opt-in

2017-11-22 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22500:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   aea1a3c..c0fbb86  branch-feature-AMBARI-22457 -> branch-feature-AMBARI-22457
   b1acd1d..cadbf35  trunk -> trunk


> Modify AMBARI-22387 to Check for LZO + No Opt-in
> 
>
> Key: AMBARI-22500
> URL: https://issues.apache.org/jira/browse/AMBARI-22500
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22500.2.branch-feature-AMBARI-22457.patch, 
> AMBARI-22500.patch
>
>
> The work for AMBARI-22387 was initially created to warn the user before 
> performing an upgrade that LZO was enabled and would need to be managed on 
> their own. However, with AMBARI-22457, we now support LZO when a user opts in 
> to installing GPL-licensed code.
> The warning in AMBARI-22387 should be changed to check for LZO being enabled 
> but the user _NOT_ opting in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

To https://git-wip-us.apache.org/repos/asf/ambari.git
   41329e3..4b42857  branch-2.6 -> branch-2.6


> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with the error below:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module.
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> 

[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Attachment: AMBARI-22558.patch

> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with the error below:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> /usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22558:
---

 Summary: Snapshot HBase task failed during IOP migration with 
TypeError
 Key: AMBARI-22558
 URL: https://issues.apache.org/jira/browse/AMBARI-22558
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker



*STR*
# Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
# Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
2453e16418fd964042452b649153dbe45f3c6009
# Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
cd4db8e9ac0ea7ce14fc1253959a121688f34952
# Register HDP-2.6.4.0-51 and call remove iop-select
# Install the new HDP version bits and start Express Upgrade

*Result*
Snapshot HBase task failed with below error:
{code}
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
line 157, in 
ExecuteUpgradeTasks().execute()
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
line 153, in actionexecute
shell.checked_call(task.command, logoutput=True, quiet=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
/var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
 take_snapshot /var/lib/ambari-agent/data/command-404.json 
/var/lib/ambari-agent/cache/custom_actions 
/var/lib/ambari-agent/data/structured-out-404.json INFO 
/var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack Feature 
Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=4.2.0.0, 
Upgrade Direction=upgrade -> 4.2.0.0
2017-11-29 07:01:50,096 - Using hadoop conf dir: 
/usr/hdp/current/hadoop-client/conf
Traceback (most recent call last):
File 
"/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
 line 37, in 
HbaseMasterUpgrade().execute()
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 367, in execute
method(env)
File 
"/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
 line 28, in take_snapshot
import params
File 
"/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
 line 107, in 
regionserver_xmn_percent = 
expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
#AMBARI-15614
TypeError: 'module' object is not callable
{code}


Upon checking the code 
[here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
 the issue seems to be with the import of the 'expect' module

I tried the following changes in params.py:
{code}
L30: from resource_management.libraries.functions import expect
L107: regionserver_xmn_percent = 
expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
#AMBARI-15614
{code}

Now the snapshot command ran fine:
{code}
2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
4.2.0.0
2017-11-29 11:14:15,474 - Using hadoop conf dir: 
/usr/hdp/current/hadoop-client/conf
2017-11-29 11:14:15,476 - checked_call['hostid'] {}
2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
/usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
{code}
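
The TypeError itself is plain Python behavior: "from package import expect" binds the module named expect to that name, and calling a module object raises exactly this error. Below is a small standalone sketch of the failure and of the reporter's workaround; the expect() signature is only assumed from the traceback, not taken from the Ambari sources.
{code}
import types

# Stand-in for resource_management.libraries.functions.expect (hypothetical layout):
# a module that exposes a function with the same name as the module itself.
expect_module = types.ModuleType("expect")

def _expect(path, value_type, default=None):
    # Toy config lookup; the real signature is only assumed from the traceback.
    return value_type(0.2)

expect_module.expect = _expect

# What params.py effectively had: the name 'expect' bound to the MODULE.
expect = expect_module
try:
    expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float)
except TypeError as e:
    print(e)  # 'module' object is not callable

# The reporter's change: call the function attribute on the module.
ratio = expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float)
print(ratio)  # 0.2

# If the library exposes the function in a submodule of the same name, importing the
# function directly would be an equivalent fix:
#   from resource_management.libraries.functions.expect import expect
{code}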






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Fix Version/s: (was: 2ю6ю1)
   2.6.1

> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with below error:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> /usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Component/s: ambari-server

> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with below error:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> /usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Fix Version/s: 2ю6ю1

> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with below error:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> /usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Status: Patch Available  (was: Open)

> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with below error:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> /usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22558) Snapshot HBase task failed during IOP migration with TypeError

2017-11-30 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22558:

Affects Version/s: 2.6.1

> Snapshot HBase task failed during IOP migration with TypeError
> --
>
> Key: AMBARI-22558
> URL: https://issues.apache.org/jira/browse/AMBARI-22558
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22558.patch
>
>
> *STR*
> # Deployed cluster with Ambari version: 2.2.0 and IOP version: 4.2.0.0
> # Upgrade Ambari to Target Version: 2.5.2.0-298 | Hash: 
> 2453e16418fd964042452b649153dbe45f3c6009
> # Upgrade Ambari to  Target Version: 2.6.1.0-64 | Hash: 
> cd4db8e9ac0ea7ce14fc1253959a121688f34952
> # Register HDP-2.6.4.0-51 and call remove iop-select
> # Install the new HDP version bits and start Express Upgrade
> *Result*
> Snapshot HBase task failed with below error:
> {code}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 157, in 
> ExecuteUpgradeTasks().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/custom_actions/scripts/ru_execute_tasks.py", 
> line 153, in actionexecute
> shell.checked_call(task.command, logoutput=True, quiet=True)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'source 
> /var/lib/ambari-agent/ambari-env.sh ; /usr/bin/ambari-python-wrap 
> /var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py
>  take_snapshot /var/lib/ambari-agent/data/command-404.json 
> /var/lib/ambari-agent/cache/custom_actions 
> /var/lib/ambari-agent/data/structured-out-404.json INFO 
> /var/lib/ambari-agent/tmp' returned 1. 2017-11-29 07:01:50,049 - Stack 
> Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command 
> Version=4.2.0.0, Upgrade Direction=upgrade -> 4.2.0.0
> 2017-11-29 07:01:50,096 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 37, in 
> HbaseMasterUpgrade().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 367, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/hbase_upgrade.py",
>  line 28, in take_snapshot
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py",
>  line 107, in 
> regionserver_xmn_percent = 
> expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", float) 
> #AMBARI-15614
> TypeError: 'module' object is not callable
> {code}
> Upon checking the code 
> [here|https://github.com/hortonworks/ambari/blob/AMBARI-2.6.1.0/ambari-server/src/main/resources/stacks/BigInsights/4.2/services/HBASE/package/scripts/params.py#L30],
>  the issue seems to be with the import of the 'expect' module
> I tried the following changes in params.py:
> {code}
> L30: from resource_management.libraries.functions import expect
> L107: regionserver_xmn_percent = 
> expect.expect("/configurations/hbase-env/hbase_regionserver_xmn_ratio", 
> float) #AMBARI-15614
> {code}
> Now the snapshot command ran fine:
> {code}
> 2017-11-29 11:14:15,472 - Stack Feature Version Info: Cluster Stack=2.6, 
> Command Stack=None, Command Version=4.2.0.0, Upgrade Direction=upgrade -> 
> 4.2.0.0
> 2017-11-29 11:14:15,474 - Using hadoop conf dir: 
> /usr/hdp/current/hadoop-client/conf
> 2017-11-29 11:14:15,476 - checked_call['hostid'] {}
> 2017-11-29 11:14:15,494 - checked_call returned (0, '16ace057')
> 2017-11-29 11:14:15,495 - Execute[' echo 'snapshot_all' | 
> /usr/iop/current/hbase-client/bin/hbase shell'] {'user': 'hbase'}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-13 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko resolved AMBARI-22387.
-
Resolution: Fixed

[~adoroszlai], 
Thank you, I missed this import. Fixed

remote: ambari git commit: AMBARI-22387. Create a Pre-Upgrade Check Warning 
About LZO. Fix import (dlysnichenko)
remote: ambari git commit: AMBARI-22387. Create a Pre-Upgrade Check Warning 
About LZO. Fix import (dlysnichenko)
To https://git-wip-us.apache.org/repos/asf/ambari.git
   3153e2d1b3..36f2ad4e19  branch-2.6 -> branch-2.6
   76349ac20d..6e706d427f  trunk -> trunk


> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support of distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22469) Ambari upgrade failed

2017-11-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22469:

Status: Patch Available  (was: Open)

> Ambari upgrade failed
> -
>
> Key: AMBARI-22469
> URL: https://issues.apache.org/jira/browse/AMBARI-22469
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: amarnath reddy pappu
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22469.patch
>
>
> Ambari upgrade would fail for all Ambari view servers.
> Steps to reproduce:
> 1. Install Ambari 2.5.2 and set it up as a view server. (It also fails if you 
> don't set it up as a view server.)
> 2. Now install 2.6.0.
> 3. Run ambari-server upgrade.
> It fails with the exception below.
> {noformat}
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> Unable to find any CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.apache.ambari.server.AmbariException: Unable to find any 
> CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
>   at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
>   ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {noformat}
> For some reason we are checking cluster_version table entries and throwing the 
> above exception.
> {noformat}
> In UpgradeCatalog260.java
> public int getCurrentVersionID() throws AmbariException, SQLException {
> List currentVersionList = 
> dbAccessor.getIntColumnValues(CLUSTER_VERSION_TABLE, REPO_VERSION_ID_COLUMN,
> new String[]{STATE_COLUMN}, new String[]{CURRENT}, false);
> if (currentVersionList.isEmpty()) {
>   throw new AmbariException("Unable to find any CURRENT repositories.");
> } else if (currentVersionList.size() != 1) {
>   throw new AmbariException("The following repositories were found to be 
> CURRENT: ".concat(StringUtils.join(currentVersionList, ",")));
> }
> return currentVersionList.get(0);
>   }
> {noformat}
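
The logic above makes the failure easy to see in isolation: an Ambari server that only hosts views has no clusters, so cluster_version has no CURRENT row and the empty-list branch aborts the whole schema upgrade. A Python rendering of the same check follows, as a sketch only; the real code is the Java method quoted above.
{code}
# Sketch of getCurrentVersionID(): empty list -> abort, one entry -> ok, several -> abort.

def get_current_version_id(current_version_list):
    if not current_version_list:
        raise Exception("Unable to find any CURRENT repositories.")
    if len(current_version_list) != 1:
        raise Exception("The following repositories were found to be CURRENT: "
                        + ",".join(str(v) for v in current_version_list))
    return current_version_list[0]

print(get_current_version_id([42]))   # normal cluster: returns the repo version id

try:
    get_current_version_id([])        # views-only server: no CURRENT rows at all
except Exception as e:
    print(e)                          # Unable to find any CURRENT repositories.
{code}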



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22469) Ambari upgrade failed

2017-11-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22469:

Fix Version/s: 2.6.1

> Ambari upgrade failed
> -
>
> Key: AMBARI-22469
> URL: https://issues.apache.org/jira/browse/AMBARI-22469
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: amarnath reddy pappu
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
>
> Ambari upgrade would fail for all Ambari view servers.
> Steps to reproduce:
> 1. Install Ambari 2.5.2 and set it up as a view server. (It also fails if you 
> don't set it up as a view server.)
> 2. Now install 2.6.0.
> 3. Run ambari-server upgrade.
> It fails with the exception below.
> {noformat}
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> Unable to find any CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.apache.ambari.server.AmbariException: Unable to find any 
> CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
>   at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
>   ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {noformat}
> For some reason we are checking cluster_version table entries and throwing the 
> above exception.
> {noformat}
> In UpgradeCatalog260.java
> public int getCurrentVersionID() throws AmbariException, SQLException {
> List currentVersionList = 
> dbAccessor.getIntColumnValues(CLUSTER_VERSION_TABLE, REPO_VERSION_ID_COLUMN,
> new String[]{STATE_COLUMN}, new String[]{CURRENT}, false);
> if (currentVersionList.isEmpty()) {
>   throw new AmbariException("Unable to find any CURRENT repositories.");
> } else if (currentVersionList.size() != 1) {
>   throw new AmbariException("The following repositories were found to be 
> CURRENT: ".concat(StringUtils.join(currentVersionList, ",")));
> }
> return currentVersionList.get(0);
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (AMBARI-22469) Ambari upgrade failed

2017-11-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko reassigned AMBARI-22469:
---

Assignee: Dmitry Lysnichenko

> Ambari upgrade failed
> -
>
> Key: AMBARI-22469
> URL: https://issues.apache.org/jira/browse/AMBARI-22469
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: amarnath reddy pappu
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
>
> Ambari upgrade would fail for all Ambari view servers.
> Steps to reproduce:
> 1. Install Ambari 2.5.2 and set it up as a view server. (It also fails if you 
> don't set it up as a view server.)
> 2. Now install 2.6.0.
> 3. Run ambari-server upgrade.
> It fails with the exception below.
> {noformat}
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> Unable to find any CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.apache.ambari.server.AmbariException: Unable to find any 
> CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
>   at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
>   ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {noformat}
> For some reason we are checking cluster_version table entries and throwing the 
> above exception.
> {noformat}
> In UpgradeCatalog260.java
> public int getCurrentVersionID() throws AmbariException, SQLException {
> List currentVersionList = 
> dbAccessor.getIntColumnValues(CLUSTER_VERSION_TABLE, REPO_VERSION_ID_COLUMN,
> new String[]{STATE_COLUMN}, new String[]{CURRENT}, false);
> if (currentVersionList.isEmpty()) {
>   throw new AmbariException("Unable to find any CURRENT repositories.");
> } else if (currentVersionList.size() != 1) {
>   throw new AmbariException("The following repositories were found to be 
> CURRENT: ".concat(StringUtils.join(currentVersionList, ",")));
> }
> return currentVersionList.get(0);
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22469) Ambari upgrade failed

2017-11-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22469:

Attachment: AMBARI-22469.patch

> Ambari upgrade failed
> -
>
> Key: AMBARI-22469
> URL: https://issues.apache.org/jira/browse/AMBARI-22469
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0
>Reporter: amarnath reddy pappu
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22469.patch
>
>
> Ambari upgrade would fail for all Ambari view servers.
> Steps to reproduce:
> 1. Install Ambari 2.5.2 and set it up as a view server. (It also fails if you 
> don't set it up as a view server.)
> 2. Now install 2.6.0.
> 3. Run ambari-server upgrade.
> It fails with the exception below.
> {noformat}
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Error output from schema upgrade command:
> ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: 
> Unable to find any CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
> Caused by: org.apache.ambari.server.AmbariException: Unable to find any 
> CURRENT repositories.
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
>   at 
> org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
>   at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
>   at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
>   ... 1 more
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11.
> REASON: Schema upgrade failed.
> {noformat}
> For some reason we are checking cluster_version table entries and throwing the 
> above exception.
> {noformat}
> In UpgradeCatalog260.java
> public int getCurrentVersionID() throws AmbariException, SQLException {
> List currentVersionList = 
> dbAccessor.getIntColumnValues(CLUSTER_VERSION_TABLE, REPO_VERSION_ID_COLUMN,
> new String[]{STATE_COLUMN}, new String[]{CURRENT}, false);
> if (currentVersionList.isEmpty()) {
>   throw new AmbariException("Unable to find any CURRENT repositories.");
> } else if (currentVersionList.size() != 1) {
>   throw new AmbariException("The following repositories were found to be 
> CURRENT: ".concat(StringUtils.join(currentVersionList, ",")));
> }
> return currentVersionList.get(0);
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-11-02 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22353:

Affects Version/s: 3.0.0

> Remove properties.json And Switch To Adding Properties to ResourceProviders 
> Dynamically
> ---
>
> Key: AMBARI-22353
> URL: https://issues.apache.org/jira/browse/AMBARI-22353
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: AMBARI-22353.patch
>
>
> Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
> which properties can be used with the provider. This seems like excessive 
> decoupling without any benefit and usually leads to runtime errors when new 
> or removed properties are forgotten.
> This file should be removed and the providers should be registering the known 
> properties on their own.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-11-02 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22353:

Fix Version/s: 3.0.0

> Remove properties.json And Switch To Adding Properties to ResourceProviders 
> Dynamically
> ---
>
> Key: AMBARI-22353
> URL: https://issues.apache.org/jira/browse/AMBARI-22353
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: AMBARI-22353.patch
>
>
> Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
> which properties can be used with the provider. This seems like excessive 
> decoupling without any benefit and usually leads to runtime errors when new 
> or removed properties are forgotten.
> This file should be removed and the providers should be registering the known 
> properties on their own.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-11-02 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22353:

Component/s: ambari-server

> Remove properties.json And Switch To Adding Properties to ResourceProviders 
> Dynamically
> ---
>
> Key: AMBARI-22353
> URL: https://issues.apache.org/jira/browse/AMBARI-22353
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Major
> Attachments: AMBARI-22353.patch
>
>
> Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
> which properties can be used with the provider. This seems like excessive 
> decoupling without any benefit and usually leads to runtime errors when new 
> or removed properties are forgotten.
> This file should be removed and the providers should be registering the known 
> properties on their own.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-11-02 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22353:
---

 Summary: Remove properties.json And Switch To Adding Properties to 
ResourceProviders Dynamically
 Key: AMBARI-22353
 URL: https://issues.apache.org/jira/browse/AMBARI-22353
 Project: Ambari
  Issue Type: Task
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Major



Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
which properties can be used with the provider. This seems like excessive 
decoupling without any benefit and usually leads to runtime errors when new or 
removed properties are forgotten.

This file should be removed and the providers should be registering the known 
properties on their own.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-11-02 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22353:

Status: Patch Available  (was: Open)

> Remove properties.json And Switch To Adding Properties to ResourceProviders 
> Dynamically
> ---
>
> Key: AMBARI-22353
> URL: https://issues.apache.org/jira/browse/AMBARI-22353
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Major
> Attachments: AMBARI-22353.patch
>
>
> Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
> which properties can be used with the provider. This seems like excessive 
> decoupling without any benefit and usually leads to runtime errors when new 
> or removed properties are forgotten.
> This file should be removed and the providers should be registering the known 
> properties on their own.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-11-02 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22353:

Attachment: AMBARI-22353.patch

> Remove properties.json And Switch To Adding Properties to ResourceProviders 
> Dynamically
> ---
>
> Key: AMBARI-22353
> URL: https://issues.apache.org/jira/browse/AMBARI-22353
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Major
> Attachments: AMBARI-22353.patch
>
>
> Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
> which properties can be used with the provider. This seems like excessive 
> decoupling without any benefit and usually leads to runtime errors when new 
> or removed properties are forgotten.
> This file should be removed and the providers should be registering the known 
> properties on their own.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-09 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Attachment: AMBARI-22387.patch

> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support of distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-09 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Status: Patch Available  (was: Open)

> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22387.patch
>
>
> Ambari has removed its native support of distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-09 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22387:
---

 Summary: Create a Pre-Upgrade Check Warning About LZO
 Key: AMBARI-22387
 URL: https://issues.apache.org/jira/browse/AMBARI-22387
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker



Ambari has removed its native support of distributing and installing LZO when 
the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO is 
enabled, this means that performing an upgrade will now require manual user 
intervention to get the LZO packages installed.

A pre-upgrade check should be created which checks to see if LZO is enabled in 
the cluster and then produces a {{WARNING}} to the user letting them know that 
before upgrading, they'd need to distribute the appropriate LZO packages.
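
As a rough illustration of what such a check needs to look at (this is not the actual Ambari check class; the property names are the standard Hadoop core-site ones and the config-dict shape is assumed), detecting LZO comes down to scanning the codec properties:
{code}
# Hedged sketch: decide whether a core-site dict references an LZO codec.

LZO_MARKERS = ("com.hadoop.compression.lzo.LzoCodec",
               "com.hadoop.compression.lzo.LzopCodec")

def lzo_enabled(core_site):
    # True if any of the usual codec properties mentions an LZO codec class.
    for prop in ("io.compression.codecs", "io.compression.codec.lzo.class"):
        value = core_site.get(prop) or ""
        if any(marker in value for marker in LZO_MARKERS):
            return True
    return False

# A cluster with core-site like this would get the pre-upgrade WARNING.
core_site = {"io.compression.codecs":
             "org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec"}
print(lzo_enabled(core_site))  # True
{code}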





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22387) Create a Pre-Upgrade Check Warning About LZO

2017-11-09 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22387:

Component/s: ambari-server

> Create a Pre-Upgrade Check Warning About LZO
> 
>
> Key: AMBARI-22387
> URL: https://issues.apache.org/jira/browse/AMBARI-22387
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>
> Ambari has removed its native support of distributing and installing LZO when 
> the LZO codecs are enabled in {{core-site}}. For existing clusters where LZO 
> is enabled, this means that performing an upgrade will now require manual 
> user intervention to get the LZO packages installed.
> A pre-upgrade check should be created which checks to see if LZO is enabled 
> in the cluster and then produces a {{WARNING}} to the user letting them know 
> that before upgrading, they'd need to distribute the appropriate LZO packages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22361) NameNode Web UI alert raised due to mixed cases in hostname

2017-11-08 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22361:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed 

remote: ambari git commit: AMBARI-22361. Fix bug in base_alert when matching 
hostnames (stephanesan via dlysnichenko)
To https://git-wip-us.apache.org/repos/asf/ambari.git
   30a43c9f3e..0f67d1c690  trunk -> trunk


> NameNode Web UI alert raised due to mixed cases in hostname
> ---
>
> Key: AMBARI-22361
> URL: https://issues.apache.org/jira/browse/AMBARI-22361
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.6.0
>Reporter: stephanesan
>Assignee: Dmitry Lysnichenko
> Fix For: 3.0.0
>
> Attachments: AMBARI-22361_branch-2.6.patch
>
>
> Error at hand:
> Connection failed to http://vz-sl-upupup-8724-hadoop-mgr-1:0 (<urlopen error [Errno 111] Connection refused>)
> Explanation:
> In the command JSON file the hostname is in lower case, while in HA mode the
> hdfs-site property dfs.namenode.http-address.{{ha-nameservice}}.{{nn_id}} may
> have upper case parts, which prevents the hostname from matching.
> Proposal:
> Apply similar patch as done for 
> https://issues.apache.org/jira/browse/AMBARI-19282
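
A minimal sketch of the proposed normalization (hypothetical helper, not the actual base_alert.py change): compare the alert host from the command JSON and the configured HA address case-insensitively before building the URI.

{code}
# Hypothetical helper illustrating the fix: hostnames in the command JSON are
# lower case, so normalize both sides before matching.
def address_matches_host(configured_address, command_json_host):
    configured_host = configured_address.split(":")[0]
    return configured_host.lower() == command_json_host.lower()

# Mixed-case value from dfs.namenode.http-address.{{ha-nameservice}}.{{nn_id}}
print(address_matches_host("VZ-SL-Upupup-8724-hadoop-mgr-1:50070",
                           "vz-sl-upupup-8724-hadoop-mgr-1"))  # True
{code}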



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (AMBARI-22361) NameNode Web UI alert raised due to mixed cases in hostname

2017-11-08 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko reassigned AMBARI-22361:
---

Assignee: Dmitry Lysnichenko

> NameNode Web UI alert raised due to mixed cases in hostname
> ---
>
> Key: AMBARI-22361
> URL: https://issues.apache.org/jira/browse/AMBARI-22361
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.6.0
>Reporter: stephanesan
>Assignee: Dmitry Lysnichenko
> Fix For: 3.0.0
>
> Attachments: AMBARI-22361_branch-2.6.patch
>
>
> Error at hand:
> Connection failed to http://vz-sl-upupup-8724-hadoop-mgr-1:0 (<urlopen error [Errno 111] Connection refused>)
> Explanation:
> In the command JSON file the hostname is in lower case, while in HA mode the
> hdfs-site property dfs.namenode.http-address.{{ha-nameservice}}.{{nn_id}} may
> have upper case parts, which prevents the hostname from matching.
> Proposal:
> Apply similar patch as done for 
> https://issues.apache.org/jira/browse/AMBARI-19282



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22594:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   12c5e9c524..e75e743fe1  branch-2.6 -> branch-2.6
   620543c6c2..86a99f2026  trunk -> trunk

> Livy server start fails during EU with 'Address already in use' error
> -
>
> Key: AMBARI-22594
> URL: https://issues.apache.org/jira/browse/AMBARI-22594
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22594.patch
>
>
> Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs
> *STR*
> # Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 
> 2.6.1.0-129
> # Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
> acbce28fdd119c72625c6beff63fc169de58ba22
> # Regenerate keytabs post Ambari upgrade and this step will restart all 
> services. Here Livy server is operational and gets restarted fine (at 
> timestamp: 09:29)
> # Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
> server' task happens and reports success (at timestamp: 10:26)
> # However when checking the livy logs - Livy restart reported below exception 
> as the previous process was not killed/stopped
> {code}
> 17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
> org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
> already in use
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
> at org.apache.livy.server.LivyServer.main(LivyServer.scala)
> Exception in thread "main" java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> {code}
> - Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
> exception, although the web UI reports the operation as a success (at timestamp: 11:37)
> - Finally the web UI shows Livy as down, even though the process is running 
> from the initial step (at timestamp: 09:29)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22594:

Fix Version/s: 2.6.1

> Livy server start fails during EU with 'Address already in use' error
> -
>
> Key: AMBARI-22594
> URL: https://issues.apache.org/jira/browse/AMBARI-22594
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22594.patch
>
>
> Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs
> *STR*
> # Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 
> 2.6.1.0-129
> # Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
> acbce28fdd119c72625c6beff63fc169de58ba22
> # Regenerate keytabs post Ambari upgrade and this step will restart all 
> services. Here Livy server is operational and gets restarted fine (at 
> timestamp: 09:29)
> # Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
> server' task happens and reports success (at timestamp: 10:26)
> # However when checking the livy logs - Livy restart reported below exception 
> as the previous process was not killed/stopped
> {code}
> 17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
> org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
> already in use
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
> at org.apache.livy.server.LivyServer.main(LivyServer.scala)
> Exception in thread "main" java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> {code}
> - Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
> exception, although the web UI reports the operation as a success (at timestamp: 11:37)
> - Finally the web UI shows Livy as down, even though the process is running 
> from the initial step (at timestamp: 09:29)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22594:

Affects Version/s: 2.6.1

> Livy server start fails during EU with 'Address already in use' error
> -
>
> Key: AMBARI-22594
> URL: https://issues.apache.org/jira/browse/AMBARI-22594
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.1
>
> Attachments: AMBARI-22594.patch
>
>
> Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs
> *STR*
> # Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 
> 2.6.1.0-129
> # Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
> acbce28fdd119c72625c6beff63fc169de58ba22
> # Regenerate keytabs post Ambari upgrade and this step will restart all 
> services. Here Livy server is operational and gets restarted fine (at 
> timestamp: 09:29)
> # Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
> server' task happens and reports success (at timestamp: 10:26)
> # However when checking the livy logs - Livy restart reported below exception 
> as the previous process was not killed/stopped
> {code}
> 17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
> org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
> already in use
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
> at org.apache.livy.server.LivyServer.main(LivyServer.scala)
> Exception in thread "main" java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> {code}
> - Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
> exception, although the web UI reports the operation as a success (at timestamp: 11:37)
> - Finally the web UI shows Livy as down, even though the process is running 
> from the initial step (at timestamp: 09:29)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22594:

Attachment: AMBARI-22594.patch

> Livy server start fails during EU with 'Address already in use' error
> -
>
> Key: AMBARI-22594
> URL: https://issues.apache.org/jira/browse/AMBARI-22594
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22594.patch
>
>
> Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs
> *STR*
> # Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 
> 2.6.1.0-129
> # Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
> acbce28fdd119c72625c6beff63fc169de58ba22
> # Regenerate keytabs post Ambari upgrade and this step will restart all 
> services. Here Livy server is operational and gets restarted fine (at 
> timestamp: 09:29)
> # Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
> server' task happens and reports success (at timestamp: 10:26)
> # However when checking the livy logs - Livy restart reported below exception 
> as the previous process was not killed/stopped
> {code}
> 17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
> org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
> already in use
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
> at org.apache.livy.server.LivyServer.main(LivyServer.scala)
> Exception in thread "main" java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> {code}
> - Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
> exception, although the web UI reports the operation as a success (at timestamp: 11:37)
> - Finally the web UI shows Livy as down, even though the process is running 
> from the initial step (at timestamp: 09:29)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22594:

Component/s: ambari-server

> Livy server start fails during EU with 'Address already in use' error
> -
>
> Key: AMBARI-22594
> URL: https://issues.apache.org/jira/browse/AMBARI-22594
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22594.patch
>
>
> Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs
> *STR*
> # Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 
> 2.6.1.0-129
> # Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
> acbce28fdd119c72625c6beff63fc169de58ba22
> # Regenerate keytabs post Ambari upgrade and this step will restart all 
> services. Here Livy server is operational and gets restarted fine (at 
> timestamp: 09:29)
> # Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
> server' task happens and reports success (at timestamp: 10:26)
> # However when checking the livy logs - Livy restart reported below exception 
> as the previous process was not killed/stopped
> {code}
> 17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
> org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
> already in use
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
> at org.apache.livy.server.LivyServer.main(LivyServer.scala)
> Exception in thread "main" java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> {code}
> - Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
> exception, although the web UI reports the operation as a success (at timestamp: 11:37)
> - Finally the web UI shows Livy as down, even though the process is running 
> from the initial step (at timestamp: 09:29)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22594:
---

 Summary: Livy server start fails during EU with 'Address already 
in use' error
 Key: AMBARI-22594
 URL: https://issues.apache.org/jira/browse/AMBARI-22594
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker



Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs

*STR*
# Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 2.6.1.0-129
# Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
acbce28fdd119c72625c6beff63fc169de58ba22
# Regenerate keytabs post Ambari upgrade and this step will restart all 
services. Here Livy server is operational and gets restarted fine (at 
timestamp: 09:29)
# Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
server' task happens and reports success (at timestamp: 10:26)
# However when checking the livy logs - Livy restart reported below exception 
as the previous process was not killed/stopped
{code}
17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
at org.apache.livy.server.LivyServer.main(LivyServer.scala)
Exception in thread "main" java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
{code}

- Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
exception, although the web UI reports the operation as a success (at timestamp: 11:37)
- Finally the web UI shows Livy as down, even though the process is running 
from the initial step (at timestamp: 09:29)
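
A minimal sketch under assumptions (a hypothetical check, not Ambari's restart logic; 8998 is Livy's usual default port): verify the port is actually free before treating a Livy restart as successful, so a leftover process is caught instead of colliding at bind time.

{code}
import socket

def port_is_free(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))   # raises "Address already in use" if a
        return True               # previous Livy process still holds the port
    except socket.error:
        return False
    finally:
        sock.close()

if not port_is_free("0.0.0.0", 8998):  # assumed Livy server port
    raise Exception("Livy port still in use; the old process was not stopped.")
{code}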





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22594) Livy server start fails during EU with 'Address already in use' error

2017-12-05 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22594:

Status: Patch Available  (was: Open)

> Livy server start fails during EU with 'Address already in use' error
> -
>
> Key: AMBARI-22594
> URL: https://issues.apache.org/jira/browse/AMBARI-22594
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22594.patch
>
>
> Observed this issue quite consistently in Ambari-2.6.1 Upgrade ST runs
> *STR*
> # Deployed cluster with Ambari version: 2.5.1.0-159 and HDP version: 
> 2.6.1.0-129
> # Upgrade Ambari to Target Version: 2.6.1.0-43 | Hash: 
> acbce28fdd119c72625c6beff63fc169de58ba22
> # Regenerate keytabs post Ambari upgrade and this step will restart all 
> services. Here Livy server is operational and gets restarted fine (at 
> timestamp: 09:29)
> # Now register HDP-2.6.4.0-36 version and perform EU. During EU 'Restart Livy 
> server' task happens and reports success (at timestamp: 10:26)
> # However when checking the livy logs - Livy restart reported below exception 
> as the previous process was not killed/stopped
> {code}
> 17/11/21 10:26:22 WARN AbstractLifeCycle: FAILED 
> org.eclipse.jetty.server.Server@3bc735b3: java.net.BindException: Address 
> already in use
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
> at org.apache.livy.server.LivyServer.main(LivyServer.scala)
> Exception in thread "main" java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> {code}
> - Post upgrade, I tried to stop/start Spark as well, and Livy still gave the same 
> exception, although the web UI reports the operation as a success (at timestamp: 11:37)
> - Finally the web UI shows Livy as down, even though the process is running 
> from the initial step (at timestamp: 09:29)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22353) Remove properties.json And Switch To Adding Properties to ResourceProviders Dynamically

2017-12-04 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22353:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed
To https://git-wip-us.apache.org/repos/asf/ambari.git
   24c64b44d9..e77a31ab0a  trunk -> trunk


> Remove properties.json And Switch To Adding Properties to ResourceProviders 
> Dynamically
> ---
>
> Key: AMBARI-22353
> URL: https://issues.apache.org/jira/browse/AMBARI-22353
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 3.0.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 3.0.0
>
> Attachments: AMBARI-22353.patch
>
>
> Legacy/ancient ResourceProviders use the {{properties.json}} file to govern 
> which properties can be used with the provider. This seems like excessive 
> decoupling without any benefit and usually leads to runtime errors when new 
> or removed properties are forgotten.
> This file should be removed and the providers should be registering the known 
> properties on their own.
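
A conceptual sketch only (the real ResourceProviders are Java classes in ambari-server; the names below are illustrative): instead of loading supported property ids from a side file, the provider declares them itself, so adding or removing a property is a single-place change.

{code}
# Before: property ids loaded from properties.json and easily out of sync.
# After (illustrated here): the provider registers its own property ids.
class AlertResourceProvider(object):
    PROPERTY_IDS = {"Alert/id", "Alert/definition_name", "Alert/state"}

    def __init__(self):
        # no external properties.json lookup; the class is the single source
        self.property_ids = set(self.PROPERTY_IDS)

provider = AlertResourceProvider()
print(sorted(provider.property_ids))
{code}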



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22678) Fix Broken Symlinks on Stack Distribution

2017-12-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22678:

Fix Version/s: 2.6.2

> Fix Broken Symlinks on Stack Distribution
> -
>
> Key: AMBARI-22678
> URL: https://issues.apache.org/jira/browse/AMBARI-22678
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.2
>
> Attachments: AMBARI-22678.patch
>
>
> There are two scenarios to cover here:
> # Ambari never conf-select'd a component (maybe because of a bug or because 
> the component didn't support it)
> # The conf pointers of a component are broken
> In either event, when distributing a new stack, the code detects this problem 
> (as it would on a first-time install) and tries to fix it:
> {code}
> /etc/component/conf (directory)
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> {code}
> The stack distribution thinks this is a first-time install and tries to fix 
> the symlinks. We end up with:
> {code}
> /etc/component/conf -> /usr/hdp/current/component
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> /usr/hdp/v2/component -> /etc/component/v2/0
> {code}
> Because we're only conf-selecting v2, v1 never gets corrected since it's 
> already installed. Thus, we have a circular symlink.
> Most likely the proper fix will be:
> - Iterate over the entire known conf-select structure
> - Check to see the state /etc/component/conf - if it's bad, fix it to defaults
> Chances are we can do this directly in 
> {{conf_select.convert_conf_directories_to_symlinks}}:
> {code}
> stack_name = Script.get_stack_name()
> for directory_struct in dirs:
>   if not os.path.exists(directory_struct['conf_dir']):
>     Logger.info("Skipping the conf-select tool on {0} since {1} does not exist.".format(
>       package, directory_struct['conf_dir']))
>     return
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22678) Fix Broken Symlinks on Stack Distribution

2017-12-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22678:

Affects Version/s: 2.6.2

> Fix Broken Symlinks on Stack Distribution
> -
>
> Key: AMBARI-22678
> URL: https://issues.apache.org/jira/browse/AMBARI-22678
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.6.2
>
> Attachments: AMBARI-22678.patch
>
>
> There are two scenarios to cover here:
> # Ambari never conf-select'd a component (maybe because of a bug or because 
> the component didn't support it)
> # The conf pointers of a component are broken
> In either event, when distributing a new stack, the code detects this problem 
> (as it would on a first-time install) and tries to fix it:
> {code}
> /etc/component/conf (directory)
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> {code}
> The stack distribution thinks this is a first-time install and tries to fix 
> the symlinks. We end up with:
> {code}
> /etc/component/conf -> /usr/hdp/current/component
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> /usr/hdp/v2/component -> /etc/component/v2/0
> {code}
> Because we're only conf-selecting v2, v1 never gets corrected since it's 
> already installed. Thus, we have a circular symlink.
> Most likely the proper fix will be:
> - Iterate over the entire known conf-select structure
> - Check to see the state /etc/component/conf - if it's bad, fix it to defaults
> Chances are we can do this directly in 
> {{conf_select.convert_conf_directories_to_symlinks}}:
> {code}
> stack_name = Script.get_stack_name()
> for directory_struct in dirs:
>   if not os.path.exists(directory_struct['conf_dir']):
>     Logger.info("Skipping the conf-select tool on {0} since {1} does not exist.".format(
>       package, directory_struct['conf_dir']))
>     return
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22678) Fix Broken Symlinks on Stack Distribution

2017-12-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22678:

Attachment: AMBARI-22678.patch

> Fix Broken Symlinks on Stack Distribution
> -
>
> Key: AMBARI-22678
> URL: https://issues.apache.org/jira/browse/AMBARI-22678
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22678.patch
>
>
> There are two scenarios to cover here:
> # Ambari never conf-select'd a component (maybe because of a bug or because 
> the component didn't support it)
> # The conf pointers of a component are broken
> In either event, when distributing a new stack, the code detects this problem 
> (as it would on a first-time install) and tries to fix it:
> {code}
> /etc/component/conf (directory)
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> {code}
> The stack distribution thinks this is a first-time install and tries to fix 
> the symlinks. We end up with:
> {code}
> /etc/component/conf -> /usr/hdp/current/component
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> /usr/hdp/v2/component -> /etc/component/v2/0
> {code}
> Because we're only conf-selecting v2, v1 never gets corrected since it's 
> already installed. Thus, we have a circular symlink.
> Most likely the proper fix will be:
> - Iterate over the entire known conf-select structure
> - Check to see the state /etc/component/conf - if it's bad, fix it to defaults
> Chances are we can do this directly in 
> {{conf_select.convert_conf_directories_to_symlinks}}:
> {code}
> stack_name = Script.get_stack_name()
> for directory_struct in dirs:
>   if not os.path.exists(directory_struct['conf_dir']):
>     Logger.info("Skipping the conf-select tool on {0} since {1} does not exist.".format(
>       package, directory_struct['conf_dir']))
>     return
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22678) Fix Broken Symlinks on Stack Distribution

2017-12-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22678:

Component/s: ambari-server

> Fix Broken Symlinks on Stack Distribution
> -
>
> Key: AMBARI-22678
> URL: https://issues.apache.org/jira/browse/AMBARI-22678
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22678.patch
>
>
> There are two scenarios to cover here:
> # Ambari never conf-select'd a component (maybe because of a bug or because 
> the component didn't support it)
> # The conf pointers of a component are broken
> In either event, when distributing a new stack, the code detects this problem 
> (as it would on a first-time install) and tries to fix it:
> {code}
> /etc/component/conf (directory)
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> {code}
> The stack distribution thinks this is a first-time install and tries to fix 
> the symlinks. We end up with:
> {code}
> /etc/component/conf -> /usr/hdp/current/component
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> /usr/hdp/v2/component -> /etc/component/v2/0
> {code}
> Because we're only conf-selecting v2, v1 never gets corrected since it's 
> already installed. Thus, we have a circular symlink.
> Most likely the proper fix will be:
> - Iterate over the entire known conf-select structure
> - Check to see the state /etc/component/conf - if it's bad, fix it to defaults
> Chances are we can do this directly in 
> {{conf_select.convert_conf_directories_to_symlinks}}:
> {code}
> stack_name = Script.get_stack_name()
> for directory_struct in dirs:
>   if not os.path.exists(directory_struct['conf_dir']):
>     Logger.info("Skipping the conf-select tool on {0} since {1} does not exist.".format(
>       package, directory_struct['conf_dir']))
>     return
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22678) Fix Broken Symlinks on Stack Distribution

2017-12-20 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-22678:

Status: Patch Available  (was: Open)

> Fix Broken Symlinks on Stack Distribution
> -
>
> Key: AMBARI-22678
> URL: https://issues.apache.org/jira/browse/AMBARI-22678
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Attachments: AMBARI-22678.patch
>
>
> There are two scenarios to cover here:
> # Ambari never conf-select'd a component (maybe because of a bug or because 
> the component didn't support it)
> # The conf pointers of a component are broken
> In either event, when distributing a new stack, the code detects this problem 
> (as it would on a first-time install) and tries to fix it:
> {code}
> /etc/component/conf (directory)
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> {code}
> The stack distribution thinks this is a first-time install and tries to fix 
> the symlinks. We end up with:
> {code}
> /etc/component/conf -> /usr/hdp/current/component
> /usr/hdp/current/component -> /usr/hdp/v1/component
> /usr/hdp/v1/component -> /etc/component/conf
> /usr/hdp/v2/component -> /etc/component/v2/0
> {code}
> Because we're only conf-selecting v2, v1 never gets corrected since it's 
> already installed. Thus, we have a circular symlink.
> Most likely the proper fix will be:
> - Iterate over the entire known conf-select structure
> - Check to see the state /etc/component/conf - if it's bad, fix it to defaults
> Chances are we can do this directly in 
> {{conf_select.convert_conf_directories_to_symlinks}}:
> {code}
> stack_name = Script.get_stack_name()
> for directory_struct in dirs:
>   if not os.path.exists(directory_struct['conf_dir']):
>     Logger.info("Skipping the conf-select tool on {0} since {1} does not exist.".format(
>       package, directory_struct['conf_dir']))
>     return
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22678) Fix Broken Symlinks on Stack Distribution

2017-12-20 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-22678:
---

 Summary: Fix Broken Symlinks on Stack Distribution
 Key: AMBARI-22678
 URL: https://issues.apache.org/jira/browse/AMBARI-22678
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Priority: Blocker



There are two scenarios to cover here:

# Ambari never conf-select'd a component (maybe because of a bug or because the 
component didn't support it)
# The conf pointers of a component are broken

In either event, when distributing a new stack, the code detects this problem 
(as it would on a first-time install) and tries to fix it:
{code}
/etc/component/conf (directory)
/usr/hdp/current/component -> /usr/hdp/v1/component
/usr/hdp/v1/component -> /etc/component/conf
{code}

The stack distribution thinks this is a first-time install and tries to fix 
the symlinks. We end up with:
{code}
/etc/component/conf -> /usr/hdp/current/component
/usr/hdp/current/component -> /usr/hdp/v1/component
/usr/hdp/v1/component -> /etc/component/conf
/usr/hdp/v2/component -> /etc/component/v2/0
{code}

Because we're only conf-selecting v2, v1 never gets corrected since it's 
already installed. Thus, we have a circular symlink.

Most likely the proper fix will be:
- Iterate over the entire known conf-select structure
- Check to see the state /etc/component/conf - if it's bad, fix it to defaults

Chances are we can do this directly in 
{{conf_select.convert_conf_directories_to_symlinks}}:
{code}
stack_name = Script.get_stack_name()
for directory_struct in dirs:
  if not os.path.exists(directory_struct['conf_dir']):
    Logger.info("Skipping the conf-select tool on {0} since {1} does not exist.".format(
      package, directory_struct['conf_dir']))
    return
{code}
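
Sketched below under assumptions (not the committed patch): when iterating the known conf-select structure, follow the /etc/component/conf link chain and treat a cycle or a dangling target as "bad", then restore the default directory instead of skipping it.

{code}
import os

def conf_dir_is_healthy(path, max_hops=10):
    """Follow symlinks manually so a circular chain is detected; healthy means
    the chain ends in a real directory."""
    seen = set()
    while os.path.islink(path) and max_hops > 0:
        if path in seen:
            return False  # circular chain, e.g. v1 -> conf -> current -> v1
        seen.add(path)
        target = os.readlink(path)
        path = os.path.join(os.path.dirname(path), target)
        max_hops -= 1
    return os.path.isdir(path) and not os.path.islink(path)

conf_dir = "/etc/component/conf"  # hypothetical component
if not conf_dir_is_healthy(conf_dir):
    print("Restoring default conf directory for {0}".format(conf_dir))
{code}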






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-23893:

Fix Version/s: 2.7.0

> Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
> 
>
> Key: AMBARI-23893
> URL: https://issues.apache.org/jira/browse/AMBARI-23893
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> {code}
> [root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 
> 8443 --protocol https --action get --host localhost --cluster cl1 
> --config-type hive-site --key hive.exec.post.hooks
> 2018-05-16 13:23:06,375 INFO ### Performing "get" content:
> Traceback (most recent call last):
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 
> 
> sys.exit(main())
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
> return get_properties(cluster, config_type, action_args, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
> get_properties
> get_config(cluster, config_type, accessor, output)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
> get_config
> properties, attributes = get_current_config(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
> get_current_config
> config_tag = get_config_tag(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
> get_config_tag
> response = accessor(DESIRED_CONFIGS_URL.format(cluster))
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
> do_request
> raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
> Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-23893:
---

 Summary: Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
 Key: AMBARI-23893
 URL: https://issues.apache.org/jira/browse/AMBARI-23893
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko




{code}
[root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 8443 
--protocol https --action get --host localhost --cluster cl1 --config-type 
hive-site --key hive.exec.post.hooks
2018-05-16 13:23:06,375 INFO ### Performing "get" content:
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 

sys.exit(main())
File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
return get_properties(cluster, config_type, action_args, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
get_properties
get_config(cluster, config_type, accessor, output)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
get_config
properties, attributes = get_current_config(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
get_current_config
config_tag = get_config_tag(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
get_config_tag
response = accessor(DESIRED_CONFIGS_URL.format(cluster))
File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
do_request
raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
{code}





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-23893:

Component/s: ambari-server

> Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
> 
>
> Key: AMBARI-23893
> URL: https://issues.apache.org/jira/browse/AMBARI-23893
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>
> {code}
> [root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 
> 8443 --protocol https --action get --host localhost --cluster cl1 
> --config-type hive-site --key hive.exec.post.hooks
> 2018-05-16 13:23:06,375 INFO ### Performing "get" content:
> Traceback (most recent call last):
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 
> 
> sys.exit(main())
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
> return get_properties(cluster, config_type, action_args, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
> get_properties
> get_config(cluster, config_type, accessor, output)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
> get_config
> properties, attributes = get_current_config(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
> get_current_config
> config_tag = get_config_tag(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
> get_config_tag
> response = accessor(DESIRED_CONFIGS_URL.format(cluster))
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
> do_request
> raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
> Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-23893:

Issue Type: Improvement  (was: Bug)

> Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
> 
>
> Key: AMBARI-23893
> URL: https://issues.apache.org/jira/browse/AMBARI-23893
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> {code}
> [root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 
> 8443 --protocol https --action get --host localhost --cluster cl1 
> --config-type hive-site --key hive.exec.post.hooks
> 2018-05-16 13:23:06,375 INFO ### Performing "get" content:
> Traceback (most recent call last):
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 
> 
> sys.exit(main())
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
> return get_properties(cluster, config_type, action_args, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
> get_properties
> get_config(cluster, config_type, accessor, output)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
> get_config
> properties, attributes = get_current_config(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
> get_current_config
> config_tag = get_config_tag(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
> get_config_tag
> response = accessor(DESIRED_CONFIGS_URL.format(cluster))
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
> do_request
> raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
> Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-23893:

Affects Version/s: 2.7.0

> Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
> 
>
> Key: AMBARI-23893
> URL: https://issues.apache.org/jira/browse/AMBARI-23893
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> {code}
> [root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 
> 8443 --protocol https --action get --host localhost --cluster cl1 
> --config-type hive-site --key hive.exec.post.hooks
> 2018-05-16 13:23:06,375 INFO ### Performing "get" content:
> Traceback (most recent call last):
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 
> 
> sys.exit(main())
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
> return get_properties(cluster, config_type, action_args, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
> get_properties
> get_config(cluster, config_type, accessor, output)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
> get_config
> properties, attributes = get_current_config(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
> get_current_config
> config_tag = get_config_tag(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
> get_config_tag
> response = accessor(DESIRED_CONFIGS_URL.format(cluster))
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
> do_request
> raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
> Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-23893:

Description: 
{code}
[root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 8443 
--protocol https --action get --host localhost --cluster cl1 --config-type 
hive-site --key hive.exec.post.hooks
2018-05-16 13:23:06,375 INFO ### Performing "get" content:
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 

sys.exit(main())
File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
return get_properties(cluster, config_type, action_args, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
get_properties
get_config(cluster, config_type, accessor, output)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
get_config
properties, attributes = get_current_config(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
get_current_config
config_tag = get_config_tag(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
get_config_tag
response = accessor(DESIRED_CONFIGS_URL.format(cluster))
File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
do_request
raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
{code}
The configs.py script has no option to disable SSL validation. As of 
Python 2.7.9, SSL certificate validation is enabled by default in the urllib2 library 
(see https://stackoverflow.com/a/19269164).
So I'm adding a command line option to the configs.py script that allows 
skipping certificate validation.



  was:


{code}
[root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 8443 
--protocol https --action get --host localhost --cluster cl1 --config-type 
hive-site --key hive.exec.post.hooks
2018-05-16 13:23:06,375 INFO ### Performing "get" content:
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 

sys.exit(main())
File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
return get_properties(cluster, config_type, action_args, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
get_properties
get_config(cluster, config_type, accessor, output)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
get_config
properties, attributes = get_current_config(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
get_current_config
config_tag = get_config_tag(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
get_config_tag
response = accessor(DESIRED_CONFIGS_URL.format(cluster))
File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
do_request
raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
{code}




> Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
> 
>
> Key: AMBARI-23893
> URL: https://issues.apache.org/jira/browse/AMBARI-23893
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> {code}
> [root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 
> 8443 --protocol https --action get --host localhost --cluster cl1 
> --config-type hive-site --key hive.exec.post.hooks
> 2018-05-16 13:23:06,375 INFO ### Performing "get" content:
> Traceback (most recent call last):
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 
> 
> sys.exit(main())
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
> return get_properties(cluster, config_type, action_args, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
> get_properties
> get_config(cluster, config_type, accessor, output)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
> get_config
> properties, attributes = get_current_config(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
> get_current_config
> config_tag = get_config_tag(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
> get_config_tag
> response = accessor(DESIRED_CONFIGS_URL.format(cluster))
> File 

[jira] [Resolved] (AMBARI-23893) Using Configs.py throws

2018-05-18 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-23893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko resolved AMBARI-23893.
-
Resolution: Fixed

> Using Configs.py throws <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)> error
> 
>
> Key: AMBARI-23893
> URL: https://issues.apache.org/jira/browse/AMBARI-23893
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 2.7.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code}
> [root@test ~]# /var/lib/ambari-server/resources/scripts/configs.py --port 
> 8443 --protocol https --action get --host localhost --cluster cl1 
> --config-type hive-site --key hive.exec.post.hooks
> 2018-05-16 13:23:06,375 INFO ### Performing "get" content:
> Traceback (most recent call last):
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 368, in 
> 
> sys.exit(main())
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 354, in main
> return get_properties(cluster, config_type, action_args, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 262, in 
> get_properties
> get_config(cluster, config_type, accessor, output)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 214, in 
> get_config
> properties, attributes = get_current_config(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 125, in 
> get_current_config
> config_tag = get_config_tag(cluster, config_type, accessor)
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 95, in 
> get_config_tag
> response = accessor(DESIRED_CONFIGS_URL.format(cluster))
> File "/var/lib/ambari-server/resources/scripts/configs.py", line 90, in 
> do_request
> raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
> Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>
> {code}
> The configs.py script has no option to disable SSL validation. As of 
> Python 2.7.9, SSL certificate validation is enabled by default in the urllib2 library 
> (see https://stackoverflow.com/a/19269164).
> So I'm adding a command line option to the configs.py script that allows 
> skipping certificate validation.
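
A minimal sketch of the approach, assuming the Python 2 urllib2 accessor that configs.py already uses; the option/function names here are assumptions, not the committed change.

{code}
import ssl
import urllib2

def open_url(url, verify_ssl=True):
    if verify_ssl:
        return urllib2.urlopen(url)
    # Python 2.7.9+ validates certificates by default; an unverified context
    # restores the old behaviour for self-signed Ambari certificates.
    context = ssl._create_unverified_context()
    return urllib2.urlopen(url, context=context)

# e.g. open_url("https://localhost:8443/api/v1/clusters/cl1", verify_ssl=False)
{code}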



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24064) Missing LdapFacade in HostUpdateHelper

2018-06-08 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-24064:
---

 Summary: Missing LdapFacade in HostUpdateHelper
 Key: AMBARI-24064
 URL: https://issues.apache.org/jira/browse/AMBARI-24064
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.0
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
 Fix For: 2.7.0


The following error is encountered when executing
{noformat}
ambari-server update-host-names host_names_changes.json
{noformat}

{noformat}
2018-06-07 18:04:12,933 ERROR [main] HostUpdateHelper:573 - Unexpected error, 
host names update failed
com.google.inject.CreationException: Unable to create injector, see the 
following errors:

1) No implementation for org.apache.ambari.server.ldap.service.LdapFacade was 
bound.
  while locating org.apache.ambari.server.ldap.service.LdapFacade
for the 1st parameter of 
org.apache.ambari.server.controller.internal.AmbariServerLDAPConfigurationHandler.<init>(AmbariServerLDAPConfigurationHandler.java:53)
  while locating 
org.apache.ambari.server.controller.internal.AmbariServerLDAPConfigurationHandler
for field at 
org.apache.ambari.server.controller.internal.RootServiceComponentConfigurationHandlerFactory.ldapConfigurationHandler(RootServiceComponentConfigurationHandlerFactory.java:33)
  while locating 
org.apache.ambari.server.controller.internal.RootServiceComponentConfigurationHandlerFactory
for field at 
org.apache.ambari.server.controller.internal.RootServiceComponentConfigurationResourceProvider.rootServiceComponentConfigurationHandlerFactory(RootServiceComponentConfigurationResourceProvider.java:48)
  at 
org.apache.ambari.server.controller.ResourceProviderFactory.getRootServiceHostComponentConfigurationResourceProvider(ResourceProviderFactory.java:1)
  at 
com.google.inject.assistedinject.FactoryProvider2.initialize(FactoryProvider2.java:666)
  at 
com.google.inject.assistedinject.FactoryModuleBuilder$1.configure(FactoryModuleBuilder.java:335)
 (via modules: 
org.apache.ambari.server.update.HostUpdateHelper$UpdateHelperModule -> 
com.google.inject.assistedinject.FactoryModuleBuilder$1)

1 error
at 
com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:470)
at 
com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:176)
at 
com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:110)
at com.google.inject.Guice.createInjector(Guice.java:99)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at 
org.apache.ambari.server.update.HostUpdateHelper.main(HostUpdateHelper.java:544)
{noformat}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24099) Service check failure is not skipped during upgrade even though "Skip All Service Check Failures" options is selected before Upgrade Start

2018-06-14 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24099:

Fix Version/s: 2.7.0

> Service check failure is not skipped during upgrade even though "Skip All 
> Service Check Failures" options is selected before Upgrade Start
> --
>
> Key: AMBARI-24099
> URL: https://issues.apache.org/jira/browse/AMBARI-24099
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Even after selecting "Skip all service check failures" option before starting 
> upgrade , the upgrade is not ignoring service check failures and upgrade gets 
> paused at the failure step.
> The expected behaviour is: it should skip the failed service check and have a 
> message like "There are failures that were automatically skipped".
> This behaviour is working fine for Slave Component failure skip.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24099) Service check failure is not skipped during upgrade even though "Skip All Service Check Failures" options is selected before Upgrade Start

2018-06-14 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24099:

Affects Version/s: 2.7.0

> Service check failure is not skipped during upgrade even though "Skip All 
> Service Check Failures" option is selected before Upgrade Start
> --
>
> Key: AMBARI-24099
> URL: https://issues.apache.org/jira/browse/AMBARI-24099
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> Even after selecting the "Skip all service check failures" option before 
> starting the upgrade, the upgrade does not ignore service check failures and 
> gets paused at the failing step.
> The expected behaviour is that the failed service check is skipped and a 
> message like "There are failures that were automatically skipped" is shown.
> This behaviour works correctly for skipping slave component failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24099) Service check failure is not skipped during upgrade even though "Skip All Service Check Failures" option is selected before Upgrade Start

2018-06-14 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24099:

Component/s: ambari-server

> Service check failure is not skipped during upgrade even though "Skip All 
> Service Check Failures" option is selected before Upgrade Start
> --
>
> Key: AMBARI-24099
> URL: https://issues.apache.org/jira/browse/AMBARI-24099
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>
> Even after selecting the "Skip all service check failures" option before 
> starting the upgrade, the upgrade does not ignore service check failures and 
> gets paused at the failing step.
> The expected behaviour is that the failed service check is skipped and a 
> message like "There are failures that were automatically skipped" is shown.
> This behaviour works correctly for skipping slave component failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24099) Service check failure is not skipped during upgrade even though "Skip All Service Check Failures" option is selected before Upgrade Start

2018-06-14 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-24099:
---

 Summary: Service check failure is not skipped during upgrade even 
though "Skip All Service Check Failures" option is selected before Upgrade 
Start
 Key: AMBARI-24099
 URL: https://issues.apache.org/jira/browse/AMBARI-24099
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko



Even after selecting the "Skip all service check failures" option before 
starting the upgrade, the upgrade does not ignore service check failures and 
gets paused at the failing step.
The expected behaviour is that the failed service check is skipped and a 
message like "There are failures that were automatically skipped" is shown.

This behaviour works correctly for skipping slave component failures.
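
A hypothetical sketch of the expected gating logic is shown below. This is not 
Ambari's actual implementation; the option key, task fields, and status values 
are illustrative assumptions only.

{code:python}
# Hypothetical sketch -- not Ambari's real internals. Illustrates how a
# "skip service check failures" option is expected to gate failure handling.
from dataclasses import dataclass


@dataclass
class UpgradeTask:
    task_type: str          # e.g. "SERVICE_CHECK" or "RESTART" (illustrative)
    status: str = "PENDING"


def handle_failed_task(task, options):
    """Return the user-facing message after a task fails during an upgrade."""
    if task.task_type == "SERVICE_CHECK" and options.get("skip_service_check_failures"):
        # Expected behaviour: record the failure as skipped and keep going.
        task.status = "SKIPPED_FAILED"
        return "There are failures that were automatically skipped"
    # Behaviour observed in this bug: the upgrade pauses at the failed step.
    task.status = "HOLDING_FAILED"
    return "Upgrade paused at the failed step"


# With the option enabled, a failed service check should be auto-skipped.
task = UpgradeTask(task_type="SERVICE_CHECK")
print(handle_failed_task(task, {"skip_service_check_failures": True}))
{code}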





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24135) Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.

2018-06-18 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24135:

Affects Version/s: 2.7.0

> Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.
> -
>
> Key: AMBARI-24135
> URL: https://issues.apache.org/jira/browse/AMBARI-24135
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> *STR*
> 1) Deployed cluster with Ambari version: 2.6.2.0-155 and HDP version: 
> 2.6.2.0-205
> 2) Ambari upgrade - 2.7.0.0-709
> 3) Stack upgrade to 3.0.0.0-1478
> 4) Delete Hive from the cluster
> 5) Stop All Services
> Stop of Knox Gateway fails with the error below:
> {code:java}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
>  line 215, in <module>
> KnoxGateway().execute()
> File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
>  line 152, in stop
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params.py",
>  line 27, in <module>
> from params_linux import *
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params_linux.py",
>  line 229, in <module>
> hive_server_host = hive_server_hosts[0]
> IndexError: list index out of range
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24135) Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.

2018-06-18 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24135:

Fix Version/s: 2.7.0

> Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.
> -
>
> Key: AMBARI-24135
> URL: https://issues.apache.org/jira/browse/AMBARI-24135
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> *STR*
> 1) Deployed cluster with Ambari version: 2.6.2.0-155 and HDP version: 
> 2.6.2.0-205
> 2) Ambari upgrade - 2.7.0.0-709
> 3) Stack upgrade to 3.0.0.0-1478
> 4) Delete Hive from the cluster
> 5) Stop All Services
> Stop of Knox Gateway fails with the error below:
> {code:java}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
>  line 215, in <module>
> KnoxGateway().execute()
> File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
>  line 152, in stop
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params.py",
>  line 27, in <module>
> from params_linux import *
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params_linux.py",
>  line 229, in <module>
> hive_server_host = hive_server_hosts[0]
> IndexError: list index out of range
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24135) Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.

2018-06-18 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24135:

Component/s: ambari-server

> Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.
> -
>
> Key: AMBARI-24135
> URL: https://issues.apache.org/jira/browse/AMBARI-24135
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> *STR*
> 1) Deployed cluster with Ambari version: 2.6.2.0-155 and HDP version: 
> 2.6.2.0-205
> 2) Ambari upgrade - 2.7.0.0-709
> 3) Stack upgrade to 3.0.0.0-1478
> 4) Delete Hive from the cluster
> 5) Stop All Services
> Stop of Knox Gateway fails with the error below:
> {code:java}
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
>  line 215, in <module>
> KnoxGateway().execute()
> File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
>  line 152, in stop
> import params
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params.py",
>  line 27, in <module>
> from params_linux import *
> File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params_linux.py",
>  line 229, in <module>
> hive_server_host = hive_server_hosts[0]
> IndexError: list index out of range
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24135) Stop of Knox Gateway fails after deleting Hive from cluster post upgrade.

2018-06-18 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-24135:
---

 Summary: Stop of Knox Gateway fails after deleting Hive from 
cluster post upgrade.
 Key: AMBARI-24135
 URL: https://issues.apache.org/jira/browse/AMBARI-24135
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko



*STR*
1) Deployed cluster with Ambari version: 2.6.2.0-155 and HDP version: 
2.6.2.0-205
2) Ambari upgrade - 2.7.0.0-709
3) Stack upgrade to 3.0.0.0-1478
4) Delete Hive from the cluster
5) Stop All Services
Stop of Knox Gateway fails with the error below:


{code:java}
Traceback (most recent call last):
File 
"/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
 line 215, in <module>
KnoxGateway().execute()
File 
"/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
line 353, in execute
method(env)
File 
"/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/knox_gateway.py",
 line 152, in stop
import params
File 
"/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params.py",
 line 27, in <module>
from params_linux import *
File 
"/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/KNOX/package/scripts/params_linux.py",
 line 229, in <module>
hive_server_host = hive_server_hosts[0]
IndexError: list index out of range
{code}
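
The failure comes from indexing an empty host list once Hive has been removed. 
A minimal defensive sketch follows, assuming the params_linux.py pattern of 
deriving hive_server_hosts from cluster configuration; how the list is 
populated here is an illustrative assumption, not the actual fix.

{code:python}
# Minimal sketch, not the actual Ambari patch: guard the HiveServer host
# lookup so Knox scripts keep working after Hive is deleted from the cluster.

# In the real script this list comes from the cluster configuration; an empty
# list models a cluster where Hive is no longer installed (assumption).
hive_server_hosts = []

# Original failing pattern from the traceback: hive_server_hosts[0]
# Guarded version: fall back to None instead of raising IndexError.
hive_server_host = hive_server_hosts[0] if hive_server_hosts else None
has_hive = hive_server_host is not None

if not has_hive:
    # Downstream Knox parameter/topology logic should treat Hive as absent
    # rather than crashing during stop.
    print("Hive is not present; skipping Hive-related Knox parameters")
{code}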






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24182) Assign Slaves and Clients Page missing warning message when no slave/client is selected

2018-06-25 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24182:

Affects Version/s: 2.7.0

> Assign Slaves and Clients Page missing warning message when no slave/client 
> is selected
> ---
>
> Key: AMBARI-24182
> URL: https://issues.apache.org/jira/browse/AMBARI-24182
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> In earlier versions, Ambari showed a warning when no hosts were selected on 
> the Assign Slaves and Clients page; this warning is missing in Ambari 2.7.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24182) Assign Slaves and Clients Page missing warning message when no slave/client is selected

2018-06-25 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24182:

Component/s: ambari-server

> Assign Slaves and Clients Page missing warning message when no slave/client 
> is selected
> ---
>
> Key: AMBARI-24182
> URL: https://issues.apache.org/jira/browse/AMBARI-24182
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> In earlier versions, Ambari showed a warning when no hosts were selected on 
> the Assign Slaves and Clients page; this warning is missing in Ambari 2.7.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24182) Assign Slaves and Clients Page missing warning message when no slave/client is selected

2018-06-25 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-24182:
---

 Summary: Assign Slaves and Clients Page missing warning message 
when no slave/client is selected
 Key: AMBARI-24182
 URL: https://issues.apache.org/jira/browse/AMBARI-24182
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko



In earlier versions, Ambari showed a warning when no hosts were selected on the 
Assign Slaves and Clients page; this warning is missing in Ambari 2.7.0.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24182) Assign Slaves and Clients Page missing warning message when no slave/client is selected

2018-06-25 Thread Dmitry Lysnichenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-24182:

Fix Version/s: 2.7.0

> Assign Slaves and Clients Page missing warning message when no slave/client 
> is selected
> ---
>
> Key: AMBARI-24182
> URL: https://issues.apache.org/jira/browse/AMBARI-24182
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
> Fix For: 2.7.0
>
>
> In earlier versions, Ambari showed a warning when no hosts were selected on 
> the Assign Slaves and Clients page; this warning is missing in Ambari 2.7.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

