[jira] [Comment Edited] (AMBARI-24547) A foreign key constraint fails when deleting a cluster from ambari
[ https://issues.apache.org/jira/browse/AMBARI-24547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594934#comment-16594934 ]

yangqk edited comment on AMBARI-24547 at 8/29/18 2:11 AM:
----------------------------------------------------------

There is no active job when I delete the cluster, and all services are stopped. If I send a scheduled batch request, like a NODEMANAGER DECOMMISSION, and wait until the request completes, this error happens when I remove the cluster via an API request.

was (Author: yangqk):
There is no active job when I delete the cluster, and all services are stopped. If I call a scheduled batch request, like a NODEMANAGER DECOMMISSION, this error happens when I remove it via an API request.

> A foreign key constraint fails when deleting a cluster from ambari
> ------------------------------------------------------------------
>
>                 Key: AMBARI-24547
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24547
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.5.0
>            Reporter: yangqk
>            Priority: Critical
>              Labels: ambari-server
>
> When deleting a cluster on which some schedule requests have been run, the Ambari server responds with 500, and ambari-server.log has an exception like this:
> {code:java}
> org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
> Cannot delete or update a parent row: a foreign key constraint fails
> (`aquila`.`request`, CONSTRAINT `FK_request_schedule_id`
> FOREIGN KEY (`request_schedule_id`) REFERENCES `requestschedule` (`schedule_id`))
> Error Code: 1451
> Call: DELETE FROM requestschedule WHERE (schedule_id = ?)
> bind => [1 parameter bound]
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
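The MySQL error 1451 above is a parent-before-child delete-ordering problem: `request` rows still point at the `requestschedule` row being deleted. A minimal sketch of the failure and of what a fix has to do, using an in-memory SQLite stand-in for the MySQL schema (table and column names taken from the error message; this is not Ambari's actual cleanup code):

```python
import sqlite3

# Reduced two-table schema matching the constraint in the error message.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE requestschedule (schedule_id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE request (
    request_id INTEGER PRIMARY KEY,
    request_schedule_id INTEGER REFERENCES requestschedule(schedule_id))""")
conn.execute("INSERT INTO requestschedule VALUES (1)")
conn.execute("INSERT INTO request VALUES (100, 1)")  # a completed batch request

# Deleting the parent row first fails, as in the reported exception.
fk_error = None
try:
    conn.execute("DELETE FROM requestschedule WHERE schedule_id = 1")
except sqlite3.IntegrityError as e:
    fk_error = e
print("constraint fires:", fk_error)

# Detaching (or deleting) the referencing child rows first lets the delete succeed.
conn.execute("UPDATE request SET request_schedule_id = NULL "
             "WHERE request_schedule_id = 1")
conn.execute("DELETE FROM requestschedule WHERE schedule_id = 1")
print("schedules left:",
      conn.execute("SELECT COUNT(*) FROM requestschedule").fetchone()[0])
```

The cluster-delete path would need the same ordering: clear or remove the `request` rows that reference a schedule before deleting the `requestschedule` row.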
[jira] [Updated] (AMBARI-24557) Remove legacy storm sink module from ambari-metrics.
[ https://issues.apache.org/jira/browse/AMBARI-24557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated AMBARI-24557:
------------------------------------
    Labels: pull-request-available  (was: )

> Remove legacy storm sink module from ambari-metrics.
> ----------------------------------------------------
>
>                 Key: AMBARI-24557
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24557
>             Project: Ambari
>          Issue Type: Task
>          Components: ambari-metrics
>    Affects Versions: 3.0.0
>            Reporter: Aravindan Vijayan
>            Assignee: Aravindan Vijayan
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.0.0
>

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (AMBARI-24557) Remove legacy storm sink module from ambari-metrics.
Aravindan Vijayan created AMBARI-24557:
------------------------------------------

             Summary: Remove legacy storm sink module from ambari-metrics.
                 Key: AMBARI-24557
                 URL: https://issues.apache.org/jira/browse/AMBARI-24557
             Project: Ambari
          Issue Type: Task
          Components: ambari-metrics
    Affects Versions: 3.0.0
            Reporter: Aravindan Vijayan
            Assignee: Aravindan Vijayan
             Fix For: 3.0.0

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (AMBARI-24553) Cannot start Hive Metastore without HDFS
[ https://issues.apache.org/jira/browse/AMBARI-24553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595594#comment-16595594 ]

Hudson commented on AMBARI-24553:
---------------------------------

SUCCESS: Integrated in Jenkins build Ambari-branch-2.7 #203 (See [https://builds.apache.org/job/Ambari-branch-2.7/203/])
AMBARI-24553. Cannot start Hive Metastore without HDFS (#2186) (github: [https://gitbox.apache.org/repos/asf?p=ambari.git;a=commit;h=87b288c0d6ac4722f34fa6e6842987441e982866])
* (edit) ambari-server/src/main/resources/stacks/HDP/2.6/services/HIVE/configuration/hive-env.xml

> Cannot start Hive Metastore without HDFS
> ----------------------------------------
>
>                 Key: AMBARI-24553
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24553
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.7.0
>            Reporter: Doroszlai, Attila
>            Assignee: Doroszlai, Attila
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.2
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Starting Hive Metastore fails if HDFS is not present in the cluster with the error: {{JAVA_HOME is not set and could not be found.}}
> {noformat}
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 211, in <module>
>     HiveMetastore().execute()
>   File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 61, in start
>     create_metastore_schema() # execute without config lock
>   File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 374, in create_metastore_schema
>     user = params.hive_user
>   File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
>     self.env.run()
>   File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
>     self.run_action(resource, action)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
>     provider_action()
>   File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
>     returns=self.resource.returns)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
>     tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
>     raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1. Error: JAVA_HOME is not set and could not be found.
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (AMBARI-24556) Aggregation across clusters is not done in AMS even when multiple cluster support is enabled.
Aravindan Vijayan created AMBARI-24556:
------------------------------------------

             Summary: Aggregation across clusters is not done in AMS even when multiple cluster support is enabled.
                 Key: AMBARI-24556
                 URL: https://issues.apache.org/jira/browse/AMBARI-24556
             Project: Ambari
          Issue Type: Bug
          Components: ambari-metrics
    Affects Versions: 2.7.1
            Reporter: Aravindan Vijayan
            Assignee: Aravindan Vijayan
             Fix For: 2.7.2

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (AMBARI-24540) Allow skipping Oozie DB schema creation for sysprepped cluster
[ https://issues.apache.org/jira/browse/AMBARI-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595574#comment-16595574 ]

Hudson commented on AMBARI-24540:
---------------------------------

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #701 (See [https://builds.apache.org/job/Ambari-branch-2.6/701/])
AMBARI-24540. Remove duplicate `host_sys_prepped` variable (#2179) (github: [https://gitbox.apache.org/repos/asf?p=ambari.git;a=commit;h=350997c663f9f6641ffa97d0f8f122f39ab16592])
* (edit) ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml
* (edit) ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params.py
* (edit) ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py

> Allow skipping Oozie DB schema creation for sysprepped cluster
> --------------------------------------------------------------
>
>                 Key: AMBARI-24540
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24540
>             Project: Ambari
>          Issue Type: Improvement
>          Components: ambari-server
>            Reporter: Doroszlai, Attila
>            Assignee: Doroszlai, Attila
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The Oozie DB schema may be manually pre-created to save time during initial service start. However, {{ooziedb.sh}} could still take quite some time just to confirm that the schema exists. The goal of this change is to allow users who pre-create the Oozie DB schema to make Ambari skip managing the DB entirely (neither creating it nor checking that it exists).

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Resolved] (AMBARI-24553) Cannot start Hive Metastore without HDFS
[ https://issues.apache.org/jira/browse/AMBARI-24553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doroszlai, Attila resolved AMBARI-24553.
----------------------------------------
    Resolution: Fixed

> Cannot start Hive Metastore without HDFS
> ----------------------------------------
>
>                 Key: AMBARI-24553
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24553
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.7.0
>            Reporter: Doroszlai, Attila
>            Assignee: Doroszlai, Attila
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.2
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Starting Hive Metastore fails if HDFS is not present in the cluster with the error: {{JAVA_HOME is not set and could not be found.}}
> {noformat}
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 211, in <module>
>     HiveMetastore().execute()
>   File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 61, in start
>     create_metastore_schema() # execute without config lock
>   File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 374, in create_metastore_schema
>     user = params.hive_user
>   File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
>     self.env.run()
>   File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
>     self.run_action(resource, action)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
>     provider_action()
>   File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
>     returns=self.resource.returns)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
>     tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
>     raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1. Error: JAVA_HOME is not set and could not be found.
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Resolved] (AMBARI-24550) Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0
[ https://issues.apache.org/jira/browse/AMBARI-24550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Attila Magyar resolved AMBARI-24550.
------------------------------------
    Resolution: Fixed

> Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0
> --------------------------------------------------------------------------------------
>
>                 Key: AMBARI-24550
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24550
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.7.0
>            Reporter: Attila Magyar
>            Assignee: Attila Magyar
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.1
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> STR:
> 1) Install a cluster with Ambari 2.7.0.0 + HDP-3.0.0.0
> 2) Upgrade Ambari to 2.7.1.0
>
> Yarn Timeline Service V2 Reader goes down after some time.
> Reason: the placeholders in yarn.timeline-service.reader.webapp.address and yarn.timeline-service.reader.webapp.https.address are no longer replaced by the stack code, so these values become empty. In this case the timeline reader falls back to the default ports, which may conflict with ports used by other components.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (AMBARI-24536) Ambari SPNEGO breaks SSO redirect
[ https://issues.apache.org/jira/browse/AMBARI-24536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Levas updated AMBARI-24536:
----------------------------------
    Status: Patch Available  (was: In Progress)

> Ambari SPNEGO breaks SSO redirect
> ---------------------------------
>
>                 Key: AMBARI-24536
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24536
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, security
>    Affects Versions: 2.6.0
>            Reporter: Sean Roberts
>            Assignee: Robert Levas
>            Priority: Major
>              Labels: kerberos, pull-request-available, security, spnego, sso
>             Fix For: 2.7.2
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> When SPNEGO is enabled (`ambari-server setup-kerberos`), the SSO (`ambari-server setup-sso`) redirect no longer works.
> How to reproduce:
> # Enable SSO: `ambari-server setup-sso`
> # `ambari-server restart`
> # Visit Ambari and notice that you are redirected to the SSO system (i.e. Knox)
> # Enable SPNEGO: `ambari-server setup-kerberos`
> # `ambari-server restart`
> # Visit Ambari and notice that you are *NOT redirected* to the SSO system (i.e. Knox)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (AMBARI-24536) Ambari SPNEGO breaks SSO redirect
[ https://issues.apache.org/jira/browse/AMBARI-24536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated AMBARI-24536:
------------------------------------
    Labels: kerberos pull-request-available security spnego sso  (was: kerberos security spnego sso)

> Ambari SPNEGO breaks SSO redirect
> ---------------------------------
>
>                 Key: AMBARI-24536
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24536
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, security
>    Affects Versions: 2.6.0
>            Reporter: Sean Roberts
>            Assignee: Robert Levas
>            Priority: Major
>              Labels: kerberos, pull-request-available, security, spnego, sso
>             Fix For: 2.7.2
>
> When SPNEGO is enabled (`ambari-server setup-kerberos`), the SSO (`ambari-server setup-sso`) redirect no longer works.
> How to reproduce:
> # Enable SSO: `ambari-server setup-sso`
> # `ambari-server restart`
> # Visit Ambari and notice that you are redirected to the SSO system (i.e. Knox)
> # Enable SPNEGO: `ambari-server setup-kerberos`
> # `ambari-server restart`
> # Visit Ambari and notice that you are *NOT redirected* to the SSO system (i.e. Knox)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (AMBARI-24538) OneFS mpack quicklinks require port, https
[ https://issues.apache.org/jira/browse/AMBARI-24538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Ketcherside updated AMBARI-24538:
----------------------------------------
    Description: 
Quicklinks for OneFS in the OneFS management pack currently have no port. Normally that's okay, because OneFS is configured by default to redirect port 80 traffic to the management port, 8080. But if HTTP file browse is enabled (`isi http settings modify --service=enabled`), that service uses port 80, so we need to point to port 8080 explicitly.

Also, OneFS will redirect from http to https, but a 400 error appears briefly. For a better user experience the quicklinks should use https.

Both the onefs_web_ui and onefs_hdfs_web_ui need to be fixed.

  was:
Quicklinks for OneFS in the OneFS management pack currently have no port. Normally that's okay, because OneFS is configured by default to redirect port 80 traffic to the management port, 8082. But if HTTP file browse is enabled (`isi http settings modify --service=enabled`), that service uses port 80, so we need to point to port 8080.

Also, OneFS will redirect from http to https, but a 400 error appears briefly. For a better user experience the quicklinks should use https.

Both the onefs_web_ui and onefs_hdfs_web_ui need to be fixed.

> OneFS mpack quicklinks require port, https
> ------------------------------------------
>
>                 Key: AMBARI-24538
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24538
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 3.0.0
>            Reporter: Robert Ketcherside
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Quicklinks for OneFS in the OneFS management pack currently have no port. Normally that's okay, because OneFS is configured by default to redirect port 80 traffic to the management port, 8080. But if HTTP file browse is enabled (`isi http settings modify --service=enabled`), that service uses port 80, so we need to point to port 8080.
> Also, OneFS will redirect from http to https, but a 400 error appears briefly. For a better user experience the quicklinks should use https.
> Both the onefs_web_ui and onefs_hdfs_web_ui need to be fixed.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (AMBARI-23591) Improve inter-service/component dependencies
[ https://issues.apache.org/jira/browse/AMBARI-23591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Thorpe updated AMBARI-23591:
--------------------------------
    Description: 
In order to better support service dependencies and third party or custom services, Ambari should be more flexible in how dependencies can be defined (some examples are listed below). We also need to make sure these dependencies are enforced in the same manner when installing the service and when adding a component to a node.

Currently, when adding a service, the component dependencies are only applied if the depended-on service is actually installed. For example, the following dependency would only be applied if HDFS was installed:

{code:xml}
<dependency>
  <name>HDFS/HDFS_CLIENT</name>
  <scope>host</scope>
  <auto-deploy>
    <enabled>true</enabled>
  </auto-deploy>
</dependency>
{code}

When adding a component to a different node after the service has been installed, the dependencies within the metainfo.xml are then enforced. This means that even if you don't have HDFS listed as a required service, the HDFS Client dependency will still be enforced even if HDFS is not installed in the cluster.

The behavior should be consistent: add service and add component should both follow the same behavior in the Ambari UI.

*Dependency Examples*

Certain service dependencies are optional. Ex. Spark has a dependency on the Hive service, but Hive is only needed if the Spark Thrift Server component is installed. Required services: HDFS, YARN, *HIVE*.

Some services may be required, but only if a certain setting is enabled. Ex. HDFS requires ZooKeeper, but only if NameNode HA is enabled.

Some components may require a dependency on a service client only if that service is installed. Ex. if Atlas is installed, then the IBM DB2 Big SQL head node requires the Atlas client.

Some components may require either a slave component or a service client. This situation applies to components that can be installed either on slave nodes or edge nodes. Ex. the IBM DB2 Big SQL worker node can be installed either on a slave node with a DataNode or on an edge node with the HDFS client.

  was:
In order to better support service dependencies and third party or custom services, Ambari should be more flexible in how dependencies can be defined (some examples are listed below). We also need to make sure these dependencies are enforced in the same manner when installing the service and when adding a component to a node.

Currently, when adding a service, the component dependencies are only applied if the depended-on service is actually installed. For example, the following dependency would only be applied if HDFS was installed:

{code:xml}
<dependency>
  <name>HDFS/HDFS_CLIENT</name>
  <scope>host</scope>
  <auto-deploy>
    <enabled>true</enabled>
  </auto-deploy>
</dependency>
{code}

When adding a component to a different node after the service has been installed, the dependencies within the metainfo.xml are then enforced. This means that even if you don't have HDFS listed as a required service, the HDFS Client dependency will still be enforced even if HDFS is not installed in the cluster.

The behavior should be consistent: add service and add component should both follow the same behavior in the Ambari UI.

*Dependency Examples*

Certain service dependencies are optional. Ex. Spark has a dependency on the Hive service, but Hive is only needed if the Spark Thrift Server component is installed. Required services: HDFS, YARN, *HIVE*.

Some services may be required, but only if a certain setting is enabled. Ex. HDFS requires ZooKeeper, but only if NameNode HA is enabled.

Some components may require a dependency between a choice of components. Ex. the IBM DB2 Big SQL worker node requires either an HDFS DataNode or an HDFS Client.

Some components may require a dependency on a service client only if that service is installed. Ex. if Atlas is installed, then the IBM DB2 Big SQL head node requires the Atlas client.

Some components may require either a slave component or a service client. This situation applies to components that can be installed either on slave nodes or edge nodes. Ex. the IBM DB2 Big SQL worker node can be installed either on a slave node with a DataNode or on an edge node with the HDFS client.

> Improve inter-service/component dependencies
> --------------------------------------------
>
>                 Key: AMBARI-23591
>                 URL: https://issues.apache.org/jira/browse/AMBARI-23591
>             Project: Ambari
>          Issue Type: Epic
>          Components: ambari-server
>    Affects Versions: 3.0.0
>            Reporter: Tim Thorpe
>            Assignee: Jayush Luniya
>            Priority: Major
>
> In order to better support service dependencies and third party or custom services, Ambari should be more flexible in how
[jira] [Updated] (AMBARI-24536) Ambari SPNEGO breaks SSO redirect
[ https://issues.apache.org/jira/browse/AMBARI-24536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Levas updated AMBARI-24536:
----------------------------------
    Fix Version/s: 2.7.2

> Ambari SPNEGO breaks SSO redirect
> ---------------------------------
>
>                 Key: AMBARI-24536
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24536
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, security
>    Affects Versions: 2.6.0
>            Reporter: Sean Roberts
>            Assignee: Robert Levas
>            Priority: Major
>              Labels: kerberos, security, spnego, sso
>             Fix For: 2.7.2
>
> When SPNEGO is enabled (`ambari-server setup-kerberos`), the SSO (`ambari-server setup-sso`) redirect no longer works.
> How to reproduce:
> # Enable SSO: `ambari-server setup-sso`
> # `ambari-server restart`
> # Visit Ambari and notice that you are redirected to the SSO system (i.e. Knox)
> # Enable SPNEGO: `ambari-server setup-kerberos`
> # `ambari-server restart`
> # Visit Ambari and notice that you are *NOT redirected* to the SSO system (i.e. Knox)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (AMBARI-24555) Nifi Registry install fails
[ https://issues.apache.org/jira/browse/AMBARI-24555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrii Babiichuk updated AMBARI-24555:
--------------------------------------
    Status: Patch Available  (was: Open)

> Nifi Registry install fails
> ---------------------------
>
>                 Key: AMBARI-24555
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24555
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.7.0
>            Reporter: Andrii Babiichuk
>            Assignee: Andrii Babiichuk
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 2.7.2
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> We are facing an issue installing Nifi Registry on an HDP_HDF cluster. The create-keytab step in Ambari fails during installation. The exception below is seen in the ambari-server logs.
> {code}
> 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to create new principal - no principal specified
> org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to create new principal - no principal specified
>         at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159)
>         at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268)
>         at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157)
>         at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460)
>         at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92)
>         at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550)
>         at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466)
>         at java.lang.Thread.run(Thread.java:748)
> 2018-08-21 13:11:03,401 INFO [Server Action Executor Worker 1305] KerberosServerAction:481 - Processing identities completed.
> 2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - Operation completely failed, aborting request id: 117
> {code}
> The Ambari UI should not display any properties from Kerberos identity blocks that indicate they are referencing another Kerberos identity. There are 2 ways we know this:
> - The new/preferred way: the identity block has a non-empty/non-null "reference" attribute
> - The old (backwards-compatible) way: the identity block has a "name" attribute that starts with a '/'.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
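The two detection rules in the description can be sketched as a small predicate. This is an illustrative helper (hypothetical name `is_identity_reference`, plain-dict input), not Ambari's actual implementation; only the "reference" and "name" attribute semantics come from the issue text:

```python
def is_identity_reference(identity):
    """Return True if a Kerberos identity block merely references another
    identity and so should be hidden from the Ambari UI (per the two rules
    in the description)."""
    # New/preferred way: a non-empty/non-null "reference" attribute.
    if identity.get("reference"):
        return True
    # Old, backwards-compatible way: a "name" attribute that starts with '/'.
    name = identity.get("name")
    return bool(name) and name.startswith("/")

print(is_identity_reference({"reference": "/SERVICE/COMPONENT/spnego"}))  # True
print(is_identity_reference({"name": "/spnego"}))                         # True
print(is_identity_reference({"name": "nifi_registry_spnego"}))            # False
```

Filtering identity blocks this way would keep reference-only blocks (which carry no principal of their own) out of the create-principal path.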
[jira] [Updated] (AMBARI-24525) Accumulo does not startup in Federated Cluster
[ https://issues.apache.org/jira/browse/AMBARI-24525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrii Babiichuk updated AMBARI-24525:
--------------------------------------
    Status: Patch Available  (was: Open)

> Accumulo does not startup in Federated Cluster
> ----------------------------------------------
>
>                 Key: AMBARI-24525
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24525
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.7.0
>            Reporter: Andrii Babiichuk
>            Assignee: Andrii Babiichuk
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 2.7.2
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In a manually set up federated cluster (not through deployNG), Accumulo was installed, and when trying to start it, the error below was thrown:
> {noformat}
> 2018-08-16 07:33:31,748 [start.Main] ERROR: Thread 'org.apache.accumulo.master.state.SetGoalState' died.
> java.lang.IllegalArgumentException: Expected fully qualified URI for instance.volumes got ns2/apps/accumulo/data
>         at org.apache.accumulo.core.volume.VolumeConfiguration.getVolumeUris(VolumeConfiguration.java:107)
>         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:334)
> {noformat}
> Caused by the incorrect config value:
> {{instance.volumes = hdfs://ns1,ns2/apps/accumulo/data}}
> where ns1 and ns2 are namespaces.
> Expected is:
> {{instance.volumes = hdfs://ns1/apps/accumulo/data,hdfs://ns2/apps/accumulo/data}}
> according to https://accumulo.apache.org/docs/2.0/administration/multivolume

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
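The expected value above is a comma-separated list with one fully qualified URI per namespace, not one URI with a comma inside its authority. An illustrative sketch (not the actual stack code) of building the value Accumulo expects from a list of HDFS nameservices:

```python
def accumulo_instance_volumes(nameservices, path):
    """Build an instance.volumes value with one fully qualified
    hdfs:// URI per nameservice, as required for federated clusters."""
    return ",".join("hdfs://%s%s" % (ns, path) for ns in nameservices)

value = accumulo_instance_volumes(["ns1", "ns2"], "/apps/accumulo/data")
print(value)  # hdfs://ns1/apps/accumulo/data,hdfs://ns2/apps/accumulo/data
```

The buggy value `hdfs://ns1,ns2/apps/accumulo/data` arises when the path is appended once to the joined nameservice list instead of to each nameservice individually.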
[jira] [Updated] (AMBARI-24554) UX issues with yarn containers widget
[ https://issues.apache.org/jira/browse/AMBARI-24554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated AMBARI-24554:
------------------------------------
    Labels: pull-request-available  (was: )

> UX issues with yarn containers widget
> -------------------------------------
>
>                 Key: AMBARI-24554
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24554
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.7.0
>            Reporter: Aleksandr Kovalenko
>            Assignee: Aleksandr Kovalenko
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.0.0
>
> # The n/a content in the yarn containers widget appears bold, whereas in other widgets it is faded out.
> # No padding among the three n/a values makes the widget look a little unintuitive.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (AMBARI-24550) Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0
[ https://issues.apache.org/jira/browse/AMBARI-24550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595125#comment-16595125 ]

Hudson commented on AMBARI-24550:
---------------------------------

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #9898 (See [https://builds.apache.org/job/Ambari-trunk-Commit/9898/])
AMBARI-24550. Yarn Timeline Service V2 Reader goes down after Ambari (github: [https://gitbox.apache.org/repos/asf?p=ambari.git;a=commit;h=93e177c81a1cc438b80d5eddbbb3afe0549a79ee])
* (edit) ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog271.java
* (edit) ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog271Test.java

> Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0
> --------------------------------------------------------------------------------------
>
>                 Key: AMBARI-24550
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24550
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.7.0
>            Reporter: Attila Magyar
>            Assignee: Attila Magyar
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.1
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> STR:
> 1) Install a cluster with Ambari 2.7.0.0 + HDP-3.0.0.0
> 2) Upgrade Ambari to 2.7.1.0
>
> Yarn Timeline Service V2 Reader goes down after some time.
> Reason: the placeholders in yarn.timeline-service.reader.webapp.address and yarn.timeline-service.reader.webapp.https.address are no longer replaced by the stack code, so these values become empty. In this case the timeline reader falls back to the default ports, which may conflict with ports used by other components.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (AMBARI-24555) Nifi Registry install fails
[ https://issues.apache.org/jira/browse/AMBARI-24555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Babiichuk updated AMBARI-24555: -- Description: Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab step in Ambari is failing during installation. Below exception is seen in ambari logs. {code} 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to create new principal - no principal specified org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to create new principal - no principal specified at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466) at java.lang.Thread.run(Thread.java:748) 2018-08-21 13:11:03,401 INFO [Server Action Executor Worker 1305] KerberosServerAction:481 - Processing identities completed. 2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - Operation completely failed, aborting request id: 117 {code} The Ambari UI should not display any properties from Kerberos identity blocks that indicate they are referencing another Kerberos identity. 
There are 2 ways we know this: - The new/preferred way: the identity block has a non-empty/non-null "reference" attribute - The old (backwards compatible way): the identity block has a "name" attribute that starts with a '/'. was: Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab step in Ambari is failing during installation. Below exception is seen in ambari logs. {code} 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to create new principal - no principal specified org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to create new principal - no principal specified at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466) at java.lang.Thread.run(Thread.java:748) 2018-08-21 13:11:03,401 INFO [Server Action Executor Worker 1305] KerberosServerAction:481 - Processing identities completed. 
2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - Operation completely failed, aborting request id: 117 {code} The Ambari UI should not display any properties from Kerberos identity blocks that indicate they are referencing another Kerberos identity. There are 2 ways we know this: - The new/preferred way: the identity block has a non-empty/non-null "reference" attribute - The old (backwards compatible way): the identity block has a "name" attribute that starts with a '/'. > Nifi Registry install fails > --- > > Key: AMBARI-24555 > URL: https://issues.apache.org/jira/browse/AMBARI-24555 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.7.0 >Reporter: Andrii Babiichuk >Assignee: Andrii Babiichuk >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.2 > > Time Spent: 20m > Remaining Estimate: 0h > > Facing
[jira] [Updated] (AMBARI-24555) Nifi Registry install fails
[ https://issues.apache.org/jira/browse/AMBARI-24555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated AMBARI-24555: Labels: pull-request-available (was: ) > Nifi Registry install fails > --- > > Key: AMBARI-24555 > URL: https://issues.apache.org/jira/browse/AMBARI-24555 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.7.0 >Reporter: Andrii Babiichuk >Assignee: Andrii Babiichuk >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.2 > > > Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab > step in Ambari is failing during installation. Below exception is seen in > ambari logs. > > {code} > 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] > CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to > create new principal - no principal specified > org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: > Failed to create new principal - no principal specified > at > org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159) > at > org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268) > at > org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157) > at > org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460) > at > org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92) > at > org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550) > at > org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466) > at java.lang.Thread.run(Thread.java:748) > 2018-08-21 13:11:03,401 INFO 
[Server Action Executor Worker 1305] > KerberosServerAction:481 - Processing identities completed. > 2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - > Operation completely failed, aborting request id: 117 > {code} > The Ambari UI should not display any properties from Kerberos identity > blocks that indicate they are referencing another Kerberos identity. There > are 2 ways we know this: > - The new/preferred way: the identity block has a non-empty/non-null > "reference" attribute > - The old (backwards compatible way): the identity block has a "name" > attribute that starts with a '/'. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
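The two detection rules quoted in the description above can be sketched as a small predicate. This is an illustrative helper only — the actual check belongs to ambari-web (JavaScript), and the function and field access pattern here are assumptions, not Ambari's API:

```python
def is_identity_reference(identity):
    """Return True if a Kerberos identity block references another identity.

    Hypothetical helper mirroring the two rules from the ticket description;
    the real Ambari check lives in ambari-web (JavaScript).
    """
    # New/preferred way: a non-empty, non-null "reference" attribute.
    if identity.get("reference"):
        return True
    # Old, backwards-compatible way: a "name" attribute starting with '/'.
    name = identity.get("name")
    return bool(name) and name.startswith("/")
```

With such a predicate, the UI could filter out referencing identity blocks before rendering their properties.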
[jira] [Updated] (AMBARI-24555) Nifi Registry install fails
[ https://issues.apache.org/jira/browse/AMBARI-24555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Babiichuk updated AMBARI-24555: -- Description: Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab step in Ambari is failing during installation. Below exception is seen in ambari logs. {code} 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to create new principal - no principal specified org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to create new principal - no principal specified at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466) at java.lang.Thread.run(Thread.java:748) 2018-08-21 13:11:03,401 INFO [Server Action Executor Worker 1305] KerberosServerAction:481 - Processing identities completed. 2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - Operation completely failed, aborting request id: 117 {code} The Ambari UI should not display any properties from Kerberos identity blocks that indicate they are referencing another Kerberos identity. 
There are 2 ways we know this: - The new/preferred way: the identity block has a non-empty/non-null "reference" attribute - The old (backwards compatible way): the identity block has a "name" attribute that starts with a '/'. was: Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab step in Ambari is failing during installation. Below exception is seen in ambari logs. {code} 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to create new principal - no principal specified org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to create new principal - no principal specified at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466) at java.lang.Thread.run(Thread.java:748) 2018-08-21 13:11:03,401 INFO [Server Action Executor Worker 1305] KerberosServerAction:481 - Processing identities completed. 
2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - Operation completely failed, aborting request id: 117 {code} > Nifi Registry install fails > --- > > Key: AMBARI-24555 > URL: https://issues.apache.org/jira/browse/AMBARI-24555 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.7.0 >Reporter: Andrii Babiichuk >Assignee: Andrii Babiichuk >Priority: Critical > Fix For: 2.7.2 > > > Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab > step in Ambari is failing during installation. Below exception is seen in > ambari logs. > > {code} > 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] > CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to > create new principal - no principal specified > org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: > Failed to create
[jira] [Created] (AMBARI-24555) Nifi Registry install fails
Andrii Babiichuk created AMBARI-24555: - Summary: Nifi Registry install fails Key: AMBARI-24555 URL: https://issues.apache.org/jira/browse/AMBARI-24555 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.7.0 Reporter: Andrii Babiichuk Assignee: Andrii Babiichuk Fix For: 2.7.2 Facing issue installing Nifi Registry on HDP_HDF cluster. The create keytab step in Ambari is failing during installation. Below exception is seen in ambari logs. {code} 2018-08-21 13:11:03,401 ERROR [Server Action Executor Worker 1305] CreatePrincipalsServerAction:309 - Failed to create principal, - Failed to create new principal - no principal specified org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to create new principal - no principal specified at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.createPrincipal(MITKerberosOperationHandler.java:159) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:268) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:157) at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:460) at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:92) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550) at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466) at java.lang.Thread.run(Thread.java:748) 2018-08-21 13:11:03,401 INFO [Server Action Executor Worker 1305] KerberosServerAction:481 - Processing identities completed. 
2018-08-21 13:11:04,191 ERROR [ambari-action-scheduler] ActionScheduler:482 - Operation completely failed, aborting request id: 117 {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (AMBARI-24554) UX issues with yarn containers widget
Aleksandr Kovalenko created AMBARI-24554: Summary: UX issues with yarn containers widget Key: AMBARI-24554 URL: https://issues.apache.org/jira/browse/AMBARI-24554 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.7.0 Reporter: Aleksandr Kovalenko Assignee: Aleksandr Kovalenko Fix For: 3.0.0 # The n/a content in the yarn containers widget seems to be bold, as compared to other widgets where it is faded out. # No padding among the three n/a values makes the widget look a little unintuitive -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24553) Cannot start Hive Metastore without HDFS
[ https://issues.apache.org/jira/browse/AMBARI-24553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated AMBARI-24553: Labels: pull-request-available (was: ) > Cannot start Hive Metastore without HDFS > > > Key: AMBARI-24553 > URL: https://issues.apache.org/jira/browse/AMBARI-24553 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.0 >Reporter: Doroszlai, Attila >Assignee: Doroszlai, Attila >Priority: Major > Labels: pull-request-available > Fix For: 2.7.2 > > > Starting Hive Metastore fails if HDFS is not present in the cluster with the > error: {{JAVA_HOME is not set and could not be found.}} > {noformat} > Traceback (most recent call last): > File > "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", > line 211, in > HiveMetastore().execute() > File > "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", > line 353, in execute > method(env) > File > "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", > line 61, in start > create_metastore_schema() # execute without config lock > File > "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", > line 374, in create_metastore_schema > user = params.hive_user > File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line > 166, in __init__ > self.env.run() > File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", > line 160, in run > self.run_action(resource, action) > File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", > line 124, in run_action > provider_action() > File > "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", > line 263, in action_run > returns=self.resource.returns) > File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line > 72, in inner > result = function(command, **kwargs) > File 
"/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line > 102, in checked_call > tries=tries, try_sleep=try_sleep, > timeout_kill_strategy=timeout_kill_strategy, returns=returns) > File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line > 150, in _call_wrapper > result = _call(command, **kwargs_copy) > File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line > 314, in _call > raise ExecutionFailed(err_msg, code, out, err) > resource_management.core.exceptions.ExecutionFailed: Execution of 'export > HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; > /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql > -userName hive -passWord [PROTECTED] -verbose' returned 1. Error: JAVA_HOME > is not set and could not be found. > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
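The failing command in the traceback above is Hive's schematool being launched without JAVA_HOME in its environment. A minimal sketch of the idea behind a fix — exporting JAVA_HOME explicitly before invoking the tool — follows; the function names are hypothetical and the paths/flags merely mirror the command shown in the log, so this is not the actual change made to the HIVE service scripts:

```python
import os
import subprocess

def schematool_env(java_home, hive_conf_dir):
    """Build the environment for Hive's schematool, exporting JAVA_HOME.

    Without JAVA_HOME the tool exits with
    "Error: JAVA_HOME is not set and could not be found."
    """
    env = dict(os.environ)
    env["JAVA_HOME"] = java_home
    env["HIVE_CONF_DIR"] = hive_conf_dir
    return env

def run_schematool(db_type, user, env):
    # Paths and flags mirror the command from the traceback; illustrative only.
    cmd = ["/usr/hdp/current/hive-server2-hive2/bin/schematool",
           "-initSchema", "-dbType", db_type, "-userName", user, "-verbose"]
    return subprocess.run(cmd, env=env, check=True)
```

The point is that the child process inherits a fully specified environment instead of relying on whatever the agent happened to have exported.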
[jira] [Updated] (AMBARI-24552) Storm service-check fails due to missing StringUtils class definition
[ https://issues.apache.org/jira/browse/AMBARI-24552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated AMBARI-24552: Labels: pull-request-available (was: ) > Storm service-check fails due to missing StringUtils class definition > - > > Key: AMBARI-24552 > URL: https://issues.apache.org/jira/browse/AMBARI-24552 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.6.2 >Reporter: Dmytro Sen >Assignee: Dmytro Sen >Priority: Blocker > Labels: pull-request-available > Fix For: 2.7.2 > > > {code:java} > 2018-08-21 21:46:35.810 o.a.s.util > Thread-24-__metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink-executor[4 > 4] [ERROR] Async loop died! > java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils > at > org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink.prepare(StormTimelineMetricsSink.java:133) > ~[ambari-metrics-storm-sink-with-common-2.6.1.0.144.jar:?] > at > org.apache.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:75) > ~[storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] > at > org.apache.storm.daemon.executor$fn__10252$fn__10265.invoke(executor.clj:800) > ~[storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] > at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:482) > [storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] > at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112] > Caused by: java.lang.ClassNotFoundException: > org.apache.commons.lang.StringUtils > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > ~[?:1.8.0_112] > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > ~[?:1.8.0_112] > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > ~[?:1.8.0_112] > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ~[?:1.8.0_112] > ... 6 more{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24552) Storm service-check fails due to missing StringUtils class definition
[ https://issues.apache.org/jira/browse/AMBARI-24552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Sen updated AMBARI-24552: Summary: Storm service-check fails due to missing StringUtils class definition (was: Storm service-check failure due to missing StringUtils class definition) > Storm service-check fails due to missing StringUtils class definition > - > > Key: AMBARI-24552 > URL: https://issues.apache.org/jira/browse/AMBARI-24552 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.6.2 >Reporter: Dmytro Sen >Assignee: Dmytro Sen >Priority: Blocker > Fix For: 2.7.2 > > > {code:java} > 2018-08-21 21:46:35.810 o.a.s.util > Thread-24-__metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink-executor[4 > 4] [ERROR] Async loop died! > java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils > at > org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink.prepare(StormTimelineMetricsSink.java:133) > ~[ambari-metrics-storm-sink-with-common-2.6.1.0.144.jar:?] > at > org.apache.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:75) > ~[storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] > at > org.apache.storm.daemon.executor$fn__10252$fn__10265.invoke(executor.clj:800) > ~[storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] > at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:482) > [storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] > at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112] > Caused by: java.lang.ClassNotFoundException: > org.apache.commons.lang.StringUtils > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > ~[?:1.8.0_112] > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > ~[?:1.8.0_112] > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > ~[?:1.8.0_112] > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ~[?:1.8.0_112] > ... 
6 more{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
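The NoClassDefFoundError above means org.apache.commons.lang.StringUtils was neither bundled in the sink jar nor present elsewhere on Storm's worker classpath. A quick way to diagnose which side is at fault is to check whether a jar actually bundles a class; the sketch below is a generic diagnostic, not part of Ambari:

```python
import zipfile

def jar_contains_class(jar_path, class_name):
    """Check whether a jar bundles a given class.

    Diagnostic sketch for the NoClassDefFoundError above: if the
    "with-common" sink jar does not shade org.apache.commons.lang.StringUtils
    and commons-lang is not on Storm's classpath, the sink fails in prepare().
    """
    # Class files are stored in jars under their slash-separated package path.
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()
```

Running this against ambari-metrics-storm-sink-with-common-*.jar would show whether the shaded dependency is really inside.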
[jira] [Commented] (AMBARI-24547) A foreign key constraint fails when deleting a cluster from ambari
[ https://issues.apache.org/jira/browse/AMBARI-24547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594934#comment-16594934 ] yangqk commented on AMBARI-24547: - There is no active job when I delete the cluster, and all services are stopped. If I call a scheduled batch request, like a NODEMANAGER DECOMMISSION, this error happens when I remove the cluster via an API request > A foreign key constraint fails when deleting a cluster from ambari > -- > > Key: AMBARI-24547 > URL: https://issues.apache.org/jira/browse/AMBARI-24547 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.5.0 >Reporter: yangqk >Priority: Critical > Labels: ambari-server > > when deleting a cluster on which some schedule requests have been called, the ambari > server responds with a 500, and ambari-server.log has an exception like this: > {code:java} > org.eclipse.persistence.exceptions.DatabaseException > Internal Exception: > com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: > Cannot delete or update a parent row: a foreign ke > y constraint fails (`aquila`.`request`, CONSTRAINT `FK_request_schedule_id` > FOREIGN KEY (`request_schedule_id`) REFERENCES `requestschedule` > (`schedule_id`)) > Error Code: 1451 > Call: DELETE FROM requestschedule WHERE (schedule_id = ?) > bind => [1 parameter bound] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
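The constraint in the error above can be reproduced in miniature: `request` rows reference `requestschedule` via `request_schedule_id`, so the parent row cannot be deleted while a child still points at it. The sketch below uses SQLite to illustrate the failure mode and the child-first delete order that avoids it; table and column names follow the MySQL error message, but this is not Ambari's actual schema DDL:

```python
import sqlite3

# Minimal reproduction of the foreign-key violation from the report.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE requestschedule (schedule_id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE request (
    request_id INTEGER PRIMARY KEY,
    request_schedule_id INTEGER REFERENCES requestschedule(schedule_id))""")
conn.execute("INSERT INTO requestschedule VALUES (1)")
conn.execute("INSERT INTO request VALUES (100, 1)")

try:
    # Deleting the parent first fails, mirroring MySQL error 1451.
    conn.execute("DELETE FROM requestschedule WHERE schedule_id = 1")
except sqlite3.IntegrityError as e:
    print("delete failed:", e)

# Deleting (or unlinking) the referencing child rows first lets the parent go.
conn.execute("DELETE FROM request WHERE request_schedule_id = 1")
conn.execute("DELETE FROM requestschedule WHERE schedule_id = 1")
```

In the same spirit, a cluster-delete would need to remove (or null out) the `request` rows pointing at a schedule before deleting the `requestschedule` row.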
[jira] [Created] (AMBARI-24552) Storm service-check failure due to missing StringUtils class definition
Dmytro Sen created AMBARI-24552: --- Summary: Storm service-check failure due to missing StringUtils class definition Key: AMBARI-24552 URL: https://issues.apache.org/jira/browse/AMBARI-24552 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.6.2 Reporter: Dmytro Sen Assignee: Dmytro Sen Fix For: 2.7.2 {code:java} 2018-08-21 21:46:35.810 o.a.s.util Thread-24-__metricsorg.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink-executor[4 4] [ERROR] Async loop died! java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils at org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink.prepare(StormTimelineMetricsSink.java:133) ~[ambari-metrics-storm-sink-with-common-2.6.1.0.144.jar:?] at org.apache.storm.metric.MetricsConsumerBolt.prepare(MetricsConsumerBolt.java:75) ~[storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] at org.apache.storm.daemon.executor$fn__10252$fn__10265.invoke(executor.clj:800) ~[storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:482) [storm-core-1.1.0.2.6.6.0-26.jar:1.1.0.2.6.6.0-26] at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112] Caused by: java.lang.ClassNotFoundException: org.apache.commons.lang.StringUtils at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_112] at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_112] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[?:1.8.0_112] at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_112] ... 6 more{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24551) [Log Search UI] get rid of redundant requests after undoing or redoing several history steps
[ https://issues.apache.org/jira/browse/AMBARI-24551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Tobias updated AMBARI-24551: --- Status: Patch Available (was: In Progress) > [Log Search UI] get rid of redundant requests after undoing or redoing > several history steps > > > Key: AMBARI-24551 > URL: https://issues.apache.org/jira/browse/AMBARI-24551 > Project: Ambari > Issue Type: Bug > Components: ambari-logsearch, logsearch >Affects Versions: 2.7.1 >Reporter: Istvan Tobias >Assignee: Istvan Tobias >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.2 > > Original Estimate: 4h > Time Spent: 2h 20m > Remaining Estimate: 1h 40m > > After undoing or redoing more than one history item, several redundant API > requests are sent. This occurs because changes to several filter controls > are applied step-by-step, and each control change generates a new request. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24551) [Log Search UI] get rid of redundant requests after undoing or redoing several history steps
[ https://issues.apache.org/jira/browse/AMBARI-24551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated AMBARI-24551: Labels: pull-request-available (was: ) > [Log Search UI] get rid of redundant requests after undoing or redoing > several history steps > > > Key: AMBARI-24551 > URL: https://issues.apache.org/jira/browse/AMBARI-24551 > Project: Ambari > Issue Type: Bug > Components: ambari-logsearch, logsearch >Affects Versions: 2.7.1 >Reporter: Istvan Tobias >Assignee: Istvan Tobias >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.2 > > Original Estimate: 4h > Remaining Estimate: 4h > > After undoing or redoing more than one history item, several redundant API > requests are sent. This occurs because changes to several filter controls > are applied step-by-step, and each control change generates a new request. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24551) [Log Search UI] get rid of redundant requests after undoing or redoing several history steps
[ https://issues.apache.org/jira/browse/AMBARI-24551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Tobias updated AMBARI-24551: --- Fix Version/s: 2.7.2 > [Log Search UI] get rid of redundant requests after undoing or redoing > several history steps > > > Key: AMBARI-24551 > URL: https://issues.apache.org/jira/browse/AMBARI-24551 > Project: Ambari > Issue Type: Bug > Components: ambari-logsearch, logsearch >Affects Versions: 2.7.1 >Reporter: Istvan Tobias >Assignee: Istvan Tobias >Priority: Minor > Fix For: 2.7.2 > > Original Estimate: 4h > Remaining Estimate: 4h > > After undoing or redoing more than one history item, several redundant API > requests are sent. This occurs because changes to several filter controls > are applied step-by-step, and each control change generates a new request. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (AMBARI-24551) [Log Search UI] get rid of redundant requests after undoing or redoing several history steps
Istvan Tobias created AMBARI-24551: -- Summary: [Log Search UI] get rid of redundant requests after undoing or redoing several history steps Key: AMBARI-24551 URL: https://issues.apache.org/jira/browse/AMBARI-24551 Project: Ambari Issue Type: Bug Components: ambari-logsearch, logsearch Affects Versions: 2.7.1 Reporter: Istvan Tobias Assignee: Istvan Tobias After undoing or redoing more than one history item, several redundant API requests are sent. This occurs because changes to several filter controls are applied step-by-step, and each control change generates a new request. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
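The fix described in this ticket amounts to batching: when a history step is restored, apply every filter-control change first and only then fire a single request. The sketch below illustrates that idea in miniature; the class and method names are illustrative assumptions, not the Log Search UI's actual API (which is Angular/TypeScript):

```python
class FilterState:
    """Coalesce several filter-control changes into a single request."""

    def __init__(self, send_request):
        self.filters = {}
        self.send_request = send_request
        self._batching = False

    def set_filter(self, name, value):
        # A lone control change still triggers a request immediately.
        self.filters[name] = value
        if not self._batching:
            self.send_request(dict(self.filters))

    def apply_history_step(self, changes):
        # Undo/redo: apply every control change first, then fire one request,
        # instead of one request per control.
        self._batching = True
        try:
            for name, value in changes.items():
                self.set_filter(name, value)
        finally:
            self._batching = False
        self.send_request(dict(self.filters))
```

Restoring a step with three changed controls then issues one API call instead of three.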
[jira] [Commented] (AMBARI-24539) OneFS mpack should not include webhdfs enable setting
[ https://issues.apache.org/jira/browse/AMBARI-24539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594896#comment-16594896 ] Hudson commented on AMBARI-24539: - FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #9896 (See [https://builds.apache.org/job/Ambari-trunk-Commit/9896/]) AMBARI-24539: OneFS mpack should not include webhdfs enable setting (m.magyar3: [https://gitbox.apache.org/repos/asf?p=ambari.git=commit=914985e406be9c4fb6a728a72b0b4644c98879da]) * (edit) contrib/management-packs/isilon-onefs-mpack/src/main/resources/addon-services/ONEFS/1.0.0/configuration/hdfs-site.xml > OneFS mpack should not include webhdfs enable setting > - > > Key: AMBARI-24539 > URL: https://issues.apache.org/jira/browse/AMBARI-24539 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 3.0.0 >Reporter: Robert Ketcherside >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > webhdfs.dfs.enabled is included in the configurations for OneFS management > pack. That is not needed because OneFS 8.1.2.0 (used with Ambari 2.7 and the > mpack) does not require the disablement of webhdfs in Ambari to support > Ambari Views. > webhdfs.dfs.enabled property should be removed entirely from > configurations/hdfs-site.xml -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (AMBARI-24538) OneFS mpack quicklinks require port, https
[ https://issues.apache.org/jira/browse/AMBARI-24538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594897#comment-16594897 ] Hudson commented on AMBARI-24538: - FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #9896 (See [https://builds.apache.org/job/Ambari-trunk-Commit/9896/]) AMBARI-24538: OneFS mpack quicklinks require port, https (#2169) (m.magyar3: [https://gitbox.apache.org/repos/asf?p=ambari.git=commit=251aec097c805f672ca1d411ff58bb37a2cb060e]) * (edit) contrib/management-packs/isilon-onefs-mpack/src/main/resources/addon-services/ONEFS/1.0.0/quicklinks/quicklinks.json > OneFS mpack quicklinks require port, https > -- > > Key: AMBARI-24538 > URL: https://issues.apache.org/jira/browse/AMBARI-24538 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 3.0.0 >Reporter: Robert Ketcherside >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Quicklinks for OneFS in the OneFS management pack currently have no port. > Normally that's okay because OneFS is configured by default to redirect port > 80 traffic to the management port, 8082. But if http file browse is enabled > (`isi http settings modify --service=enabled`), that uses port 80. We need to > point to port 8080. > Also, OneFS will redirect from http to https, but a 400 error appears > briefly. For better user experience the quicklinks should use https. > Both the onefs_web_ui and onefs_hdfs_web_ui need to be fixed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
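The quicklink change described above boils down to building URLs with an explicit scheme and port rather than relying on OneFS's port-80 redirect. A minimal sketch, with default values that are assumptions drawn loosely from the ticket (not the mpack's actual quicklinks.json):

```python
def onefs_quicklink(host, port=8082, scheme="https"):
    """Build an explicit OneFS quicklink URL.

    Naming the scheme and port avoids the port-80 redirect, which breaks
    when the http file-browse service claims port 80, and skips the brief
    400 error seen on the http-to-https redirect.
    """
    return "%s://%s:%d/" % (scheme, host, port)
```

Both the onefs_web_ui and onefs_hdfs_web_ui links would be built this way.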
[jira] [Commented] (AMBARI-24400) Tests in ambari-metrics fail with java.lang.NoSuchFieldError: mapper
[ https://issues.apache.org/jira/browse/AMBARI-24400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594869#comment-16594869 ] Alisha Prabhu commented on AMBARI-24400: Hi [~avijayan], The command used before running tests is : mvn clean install -DskipTests Observed that the tests fail with the above error when using the below cmd in ambari-metrics : mvn -Dtest=AggregatedMetricsPublisherTest -DfailIfNoTests=false test whereas they pass when 'clean' is used explicitly : mvn clean -Dtest=AggregatedMetricsPublisherTest -DfailIfNoTests=false test I am unable to understand this behaviour of the test case. Please guide me with the right approach. > Tests in ambari-metrics fail with java.lang.NoSuchFieldError: mapper > > > Key: AMBARI-24400 > URL: https://issues.apache.org/jira/browse/AMBARI-24400 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Reporter: Alisha Prabhu >Priority: Major > Labels: ppc64le, x86_64 > Attachments: AMBARI-24400.patch > > > Commands used at ./ambari/ambari-metrics/ : > mvn -Dtest=AggregatedMetricsPublisherTest -DfailIfNoTests=false test > mvn -Dtest=RawMetricsPublisherTest -DfailIfNoTests=false test > Error : > {code:java} > Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.641 sec <<< > FAILURE! - in > org.apache.hadoop.metrics2.sink.timeline.AggregatedMetricsPublisherTest > testProcessMetrics(org.apache.hadoop.metrics2.sink.timeline.AggregatedMetricsPublisherTest) > Time elapsed: 0.043 sec <<< ERROR! > java.lang.NoSuchFieldError: mapper > at > org.apache.hadoop.metrics2.sink.timeline.AggregatedMetricsPublisherTest.testProcessMetrics(AggregatedMetricsPublisherTest.java:65) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24550) Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0
[ https://issues.apache.org/jira/browse/AMBARI-24550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated AMBARI-24550: Labels: pull-request-available (was: ) > Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 > to 2.7.1.0 > -- > > Key: AMBARI-24550 > URL: https://issues.apache.org/jira/browse/AMBARI-24550 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Labels: pull-request-available > Fix For: 2.7.1 > > > STR: > 1) Install cluster with Ambari 2.7.0.0 + HDP-3.0.0.0 > 2) Upgrade Ambari to 2.7.1.0 > Yarn Timeline Service V2 Reader goes down after some time. > Reason: the placeholders in yarn.timeline-service.reader.webapp.address and > yarn.timeline-service.reader.webapp.https.address are no longer replaced by > the stack code, so these values become empty. In this case the timeline reader > will use the default ports, which may conflict with ports used by > other components. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (AMBARI-24550) Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0
Attila Magyar created AMBARI-24550: -- Summary: Yarn Timeline Service V2 Reader goes down after Ambari Upgrade from 2.7.0.0 to 2.7.1.0 Key: AMBARI-24550 URL: https://issues.apache.org/jira/browse/AMBARI-24550 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.7.0 Reporter: Attila Magyar Assignee: Attila Magyar Fix For: 2.7.1 STR: 1) Install cluster with Ambari 2.7.0.0 + HDP-3.0.0.0 2) Upgrade Ambari to 2.7.1.0 Yarn Timeline Service V2 Reader goes down after some time. Reason: the placeholders in yarn.timeline-service.reader.webapp.address and yarn.timeline-service.reader.webapp.https.address are no longer replaced by the stack code, so these values become empty. In this case the timeline reader will use the default ports, which may conflict with ports used by other components. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
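For reference, the two reader address properties in question can also be pinned explicitly in yarn-site.xml so the reader does not fall back to default ports when the placeholders are lost. The hostname below is an illustrative placeholder, not a value from the ticket; 8198/8199 are the stock Hadoop 3 defaults for the timeline reader web UI.

```xml
<!-- Illustrative yarn-site.xml fragment; the hostname is an example only. -->
<property>
  <name>yarn.timeline-service.reader.webapp.address</name>
  <value>timelinereader.example.com:8198</value>
</property>
<property>
  <name>yarn.timeline-service.reader.webapp.https.address</name>
  <value>timelinereader.example.com:8199</value>
</property>
```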
[jira] [Created] (AMBARI-24549) Move blueprint provisioning state property to host component level
Myroslav Papirkovskyi created AMBARI-24549: -- Summary: Move blueprint provisioning state property to host component level Key: AMBARI-24549 URL: https://issues.apache.org/jira/browse/AMBARI-24549 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.7.1 Reporter: Myroslav Papirkovskyi Assignee: Myroslav Papirkovskyi Fix For: 2.7.2 Move blueprint provisioning state property from cluster level to host component level. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24516) Default value for LDAP type
[ https://issues.apache.org/jira/browse/AMBARI-24516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated AMBARI-24516: Labels: pull-request-available (was: ) > Default value for LDAP type > --- > > Key: AMBARI-24516 > URL: https://issues.apache.org/jira/browse/AMBARI-24516 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.1 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Labels: pull-request-available > Fix For: 2.7.2 > > > 1. Set the default value for the LDAP type to 'Generic'. > 2. Add a command line option to disable asking for the LDAP type; use the > Generic defaults if it is set and the properties do not exist and were not given. > 3. Ask for the LDAP type only if any of the properties whose default value > depends on the LDAP type is missing. > 4. Ask for the user credentials and query Ambari for the existing values > first, then offer these values as defaults. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (AMBARI-24548) Allow skipping Hive Metastore schema creation for sysprepped cluster
Doroszlai, Attila created AMBARI-24548: -- Summary: Allow skipping Hive Metastore schema creation for sysprepped cluster Key: AMBARI-24548 URL: https://issues.apache.org/jira/browse/AMBARI-24548 Project: Ambari Issue Type: Improvement Components: ambari-server Reporter: Doroszlai, Attila Assignee: Doroszlai, Attila Similar to AMBARI-24540, Hive Metastore DB schema may be manually pre-created to save time during initial service start. However, {{schematool}} could still take quite some time to confirm that the schema exists. The goal of this change is to allow users who pre-create Hive Metastore DB schema to make Ambari skip managing the DB (create or check existence). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (AMBARI-24547) A foreign key constraint fails when deleting a cluster from ambari
[ https://issues.apache.org/jira/browse/AMBARI-24547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594691#comment-16594691 ] Dmytro Grinenko commented on AMBARI-24547: -- [~yangqk] did you mean that when any job is active on the cluster, removing it via an API request will produce HTTP error 500 with the behavior stated in the description? > A foreign key constraint fails when deleting a cluster from ambari > -- > > Key: AMBARI-24547 > URL: https://issues.apache.org/jira/browse/AMBARI-24547 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.5.0 >Reporter: yangqk >Priority: Critical > Labels: ambari-server > > when deleting a cluster on which some schedule requests have been run, ambari > server will respond with 500, and ambari-server.log has an exception like this: > {code:java} > org.eclipse.persistence.exceptions.DatabaseException > Internal Exception: > com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: > Cannot delete or update a parent row: a foreign key > constraint fails (`aquila`.`request`, CONSTRAINT `FK_request_schedule_id` > FOREIGN KEY (`request_schedule_id`) REFERENCES `requestschedule` > (`schedule_id`)) > Error Code: 1451 > Call: DELETE FROM requestschedule WHERE (schedule_id = ?) > bind => [1 parameter bound] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
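The constraint violation in the log can be reproduced in miniature: `request.request_schedule_id` references `requestschedule.schedule_id`, so deleting the parent schedule row while a request still points at it fails. The sketch below uses sqlite3 as a stand-in for MySQL, and the "detach the children first" remedy at the end is one plausible fix, not necessarily what the Ambari patch does.

```python
import sqlite3

# In-memory stand-in for the schema relationship from the error log.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE requestschedule (schedule_id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE request ("
    " request_id INTEGER PRIMARY KEY,"
    " request_schedule_id INTEGER REFERENCES requestschedule(schedule_id))"
)
conn.execute("INSERT INTO requestschedule VALUES (1)")
conn.execute("INSERT INTO request VALUES (100, 1)")

# Deleting the parent row while a child still references it fails,
# mirroring the MySQLIntegrityConstraintViolationException (error 1451).
violated = False
try:
    conn.execute("DELETE FROM requestschedule WHERE schedule_id = 1")
except sqlite3.IntegrityError:
    violated = True

# One possible remedy (an assumption, for illustration): clear the child
# references before deleting the schedule row.
conn.execute(
    "UPDATE request SET request_schedule_id = NULL"
    " WHERE request_schedule_id = 1"
)
conn.execute("DELETE FROM requestschedule WHERE schedule_id = 1")
remaining = conn.execute(
    "SELECT COUNT(*) FROM requestschedule"
).fetchone()[0]
```

Note that sqlite3 only enforces foreign keys when `PRAGMA foreign_keys = ON` is set, whereas InnoDB enforces them unconditionally.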
[jira] [Commented] (AMBARI-24540) Allow skipping Oozie DB schema creation for sysprepped cluster
[ https://issues.apache.org/jira/browse/AMBARI-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594661#comment-16594661 ] Hudson commented on AMBARI-24540: - FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #700 (See [https://builds.apache.org/job/Ambari-branch-2.6/700/]) AMBARI-24540. Allow skipping Oozie DB schema creation for sysprepped (github: [https://gitbox.apache.org/repos/asf?p=ambari.git=commit=1f05db9a322433ce7dc252288fb835133f281407]) * (edit) ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml * (edit) ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py * (edit) ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_service.py > Allow skipping Oozie DB schema creation for sysprepped cluster > -- > > Key: AMBARI-24540 > URL: https://issues.apache.org/jira/browse/AMBARI-24540 > Project: Ambari > Issue Type: Improvement > Components: ambari-server >Reporter: Doroszlai, Attila >Assignee: Doroszlai, Attila >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Oozie DB schema may be manually pre-created to save time during initial > service start. However, {{ooziedb.sh}} could still take quite some time to > confirm that the schema exists. The goal of this change is to allow users > who pre-create Oozie DB schema to make Ambari skip managing the DB (create or > check existence). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
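The skip behavior described above can be sketched as a simple predicate over the cluster configuration. The property name below is hypothetical, chosen for illustration; the real flag is defined in cluster-env.xml and consumed by the Oozie service scripts (params_linux.py / oozie_service.py), which may name and read it differently.

```python
# Hypothetical sketch of the sysprep skip logic for Oozie DB preparation.
# "sysprep_skip_oozie_schema_prepare" is an assumed property name.
def should_prepare_oozie_db(cluster_env):
    """Return False when the admin pre-created the schema on a sysprepped
    cluster, so ooziedb.sh is not invoked at all (not even to check that
    the schema exists)."""
    flag = cluster_env.get("sysprep_skip_oozie_schema_prepare", "false")
    return str(flag).lower() != "true"

# With the flag set, both schema creation and the existence check are skipped.
skip_env = {"sysprep_skip_oozie_schema_prepare": "true"}
default_env = {}
```

The point of gating the existence check as well, and not just the creation, is that `ooziedb.sh` itself is the slow step even when it ends up doing nothing.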
[jira] [Updated] (AMBARI-24516) Default value for LDAP type
[ https://issues.apache.org/jira/browse/AMBARI-24516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated AMBARI-24516: Description: 1. Set default value for ldap type 'Generic' 2. command line option for disable asking ldap-type. Use Generic defaults for if set and properties are not exists and not given. 3. Ask for ldap type only if any of the properties which default value is depending from ldap type is missing. 4. Ask for the user credentials and queries Ambari for the existing values first. Than offer these values as defaults. was: 1. Set default value for ldap type 'Generic' 2. command line option for configuring the ldap type 3. Ask for ldap type only if any of the properties which default value is depending from ldap type is missing. 4. Asks for the user credentials and queries Ambari for the existing values first. Than offer these values as defaults. > Default value for LDAP type > --- > > Key: AMBARI-24516 > URL: https://issues.apache.org/jira/browse/AMBARI-24516 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.1 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Fix For: 2.7.2 > > > 1. Set default value for ldap type 'Generic' > 2. command line option for disable asking ldap-type. Use Generic defaults for > if set and properties are not exists and not given. > 3. Ask for ldap type only if any of the properties which default value is > depending from ldap type is missing. > 4. Ask for the user credentials and queries Ambari for the existing values > first. Than offer these values as defaults. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24516) Default value for LDAP type
[ https://issues.apache.org/jira/browse/AMBARI-24516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated AMBARI-24516: Description: 1. Set default value for ldap type 'Generic' 2. command line option for configuring the ldap type 3. Ask for ldap type only if any of the properties which default value is depending from ldap type is missing. 4. Asks for the user credentials and queries Ambari for the existing values first. Than offer these values as defaults. was: 1. Set default value for ldap type 'Generic' 2. command line option for configuring the ldap type 3. Ask for ldap type only if any of the properties which default value is depending from ldap type is missing. 3. Asks for the user credentials and queries Ambari for the existing values first. Than offer these values as defaults. > Default value for LDAP type > --- > > Key: AMBARI-24516 > URL: https://issues.apache.org/jira/browse/AMBARI-24516 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.1 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Fix For: 2.7.2 > > > 1. Set default value for ldap type 'Generic' > 2. command line option for configuring the ldap type > 3. Ask for ldap type only if any of the properties which default value is > depending from ldap type is missing. > 4. Asks for the user credentials and queries Ambari for the existing values > first. Than offer these values as defaults. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (AMBARI-24516) Default value for LDAP type
[ https://issues.apache.org/jira/browse/AMBARI-24516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated AMBARI-24516: Description: 1. Set default value for ldap type 'Generic' 2. command line option for configuring the ldap type 3. Ask for ldap type only if any of the properties which default value is depending from ldap type is missing. 3. Asks for the user credentials and queries Ambari for the existing values first. Than offer these values as defaults. was: 1. Set default value for ldap type 'Generic' 2. command line option for configuring the ldap type > Default value for LDAP type > --- > > Key: AMBARI-24516 > URL: https://issues.apache.org/jira/browse/AMBARI-24516 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.1 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Fix For: 2.7.2 > > > 1. Set default value for ldap type 'Generic' > 2. command line option for configuring the ldap type > 3. Ask for ldap type only if any of the properties which default value is > depending from ldap type is missing. > 3. Asks for the user credentials and queries Ambari for the existing values > first. Than offer these values as defaults. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (AMBARI-24542) Rename LDAP configuration ambari.ldap.advance.collision_behavior
[ https://issues.apache.org/jira/browse/AMBARI-24542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594573#comment-16594573 ] Hudson commented on AMBARI-24542: - SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9895 (See [https://builds.apache.org/job/Ambari-trunk-Commit/9895/]) AMBARI-24542. Fixing typo in LDAP configuration property name (#2174) (github: [https://gitbox.apache.org/repos/asf?p=ambari.git=commit=17caa686539f164622db7243a503db7fe4349a65]) * (edit) ambari-server/src/main/java/org/apache/ambari/server/upgrade/SchemaUpgradeHelper.java * (edit) ambari-server/src/main/java/org/apache/ambari/server/configuration/AmbariServerConfigurationKey.java * (edit) ambari-server/src/main/python/ambari_server/setupSecurity.py * (add) ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog272Test.java * (add) ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog272.java * (edit) ambari-server/src/test/python/TestAmbariServer.py > Rename LDAP configuration ambari.ldap.advance.collision_behavior > > > Key: AMBARI-24542 > URL: https://issues.apache.org/jira/browse/AMBARI-24542 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.0, 2.7.1 >Reporter: Sandor Molnar >Assignee: Sandor Molnar >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.2 > > Time Spent: 3h 10m > Remaining Estimate: 0h > > In Ambari 2.7 we moved the LDAP configuration into the Ambari DB and introduced > a common naming pattern. However, a typo was made in > _'ambari.ldap.*advance*.collision_behavior'_. This should be renamed to > _'ambari.ldap.*advanced*.collision_behavior'_ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
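Conceptually, the upgrade step performed by UpgradeCatalog272 amounts to renaming one stored property key. The sketch below shows that rename over a plain dict; the actual upgrade code operates on Ambari's configuration storage in the DB, and the helper name here is invented for illustration.

```python
# Hypothetical sketch of the property rename done during upgrade; the real
# logic lives in UpgradeCatalog272.java and works against the Ambari DB.
OLD = "ambari.ldap.advance.collision_behavior"
NEW = "ambari.ldap.advanced.collision_behavior"

def rename_ldap_property(props):
    """Return a copy of props with the misspelled LDAP key renamed,
    preserving its configured value."""
    fixed = dict(props)
    if OLD in fixed:
        fixed[NEW] = fixed.pop(OLD)
    return fixed

settings = rename_ldap_property({OLD: "convert"})
```

An upgrade-time rename like this keeps the user's configured collision behavior intact while retiring the misspelled key.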