[
https://issues.apache.org/jira/browse/AMBARI-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118726#comment-14118726
]
Alexander Denissov commented on AMBARI-7023:
--------------------------------------------
We will likely use a PHD stack with a version greater than 2.1, and we do not
yet have an Ambari trunk-compatible stack (we have used the 1.6.1 branch so
far), so I cannot verify the fix at this time. Please apply the change, and we
will report back if we encounter anything similar again.
> Incorrect ATS metric request for non-HDP stack with version 2.1
> ---------------------------------------------------------------
>
> Key: AMBARI-7023
> URL: https://issues.apache.org/jira/browse/AMBARI-7023
> Project: Ambari
> Issue Type: Bug
> Affects Versions: 1.6.1
> Reporter: Alexander Denissov
> Assignee: Jaimin D Jetly
> Priority: Critical
> Fix For: 1.7.0
>
>
> Define a non-HDP stack based on hadoop 2.2, such as PHD 2.1.0 with
> HDFS+YARN+ZK services.
> After cluster deployment, when the user presses "Complete" and the UI tries
> to navigate to the dashboard, a "Server Error" popup appears and the UI gets
> stuck on the loading bar at
> http://c6401.ambari.apache.org:8080/#/main/dashboard/metrics.
> The popup shows the following error message:
> 500 status code received on GET method for API:
> /api/v1/clusters/test/components/?ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|ServiceComponentInfo/component_name=JOURNALNODE|ServiceComponentInfo/category=MASTER&fields=ServiceComponentInfo/Version,ServiceComponentInfo/StartTime,ServiceComponentInfo/HeapMemoryUsed,ServiceComponentInfo/HeapMemoryMax,ServiceComponentInfo/service_name,host_components/HostRoles/host_name,host_components/HostRoles/state,host_components/HostRoles/maintenance_state,host_components/HostRoles/stale_configs,host_components/metrics/jvm/memHeapUsedM,host_components/metrics/jvm/HeapMemoryMax,host_components/metrics/jvm/HeapMemoryUsed,host_components/metrics/jvm/memHeapCommittedM,host_components/metrics/mapred/jobtracker/trackers_decommissioned,host_components/metrics/cpu/cpu_wio,host_components/metrics/rpc/RpcQueueTime_avg_time,host_components/metrics/dfs/FSNamesystem/*,host_components/metrics/dfs/namenode/Version,host_components/metrics/dfs/namenode/DecomNodes,host_components/metrics/dfs/namenode/TotalFiles,host_components/metrics/dfs/namenode/UpgradeFinalized,host_components/metrics/dfs/namenode/Safemode,host_components/metrics/runtime/StartTime,host_components/metrics/yarn/Queue,ServiceComponentInfo/rm_metrics/cluster/activeNMcount,ServiceComponentInfo/rm_metrics/cluster/unhealthyNMcount,ServiceComponentInfo/rm_metrics/cluster/rebootedNMcount,ServiceComponentInfo/rm_metrics/cluster/decommissionedNMcount&minimal_response=true
>
> Error message: org.apache.ambari.server.controller.spi.SystemException: An
> internal system exception occurred: Could not find service for component,
> componentName=APP_TIMELINE_SERVER, clusterName=test, stackInfo=PHD-2.1.0
> The problem, I believe, is in
> ambari-web/app/controllers/global/update_controller.js, lines:
>
>     isATSInstalled = App.cache['services'].mapProperty('ServiceInfo.service_name').contains('YARN') && App.get('isHadoop21Stack'),
>     flumeHandlerParam = isFlumeInstalled ? 'ServiceComponentInfo/component_name=FLUME_HANDLER|' : '',
>     atsHandlerParam = isATSInstalled ? 'ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|' : '',
> and in ambari-web/app/app.js, lines:
>
>     isHadoop21Stack: function () {
>       return (stringUtils.compareVersions(this.get('currentStackVersionNumber'), "2.1") === 1 ||
>               stringUtils.compareVersions(this.get('currentStackVersionNumber'), "2.1") === 0);
>     }.property('currentStackVersionNumber'),
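The two snippets above can be exercised together in a minimal sketch (plain Node.js, no Ember). The compareVersions helper below is a stand-in modeled on the 1/0/-1 contract of stringUtils.compareVersions, and the serviceNames array is an assumed substitute for App.cache['services']:

```javascript
// Stand-in for stringUtils.compareVersions: returns 1, 0, or -1.
function compareVersions(a, b) {
  var pa = String(a).split('.'), pb = String(b).split('.');
  for (var i = 0; i < Math.max(pa.length, pb.length); i++) {
    var na = parseInt(pa[i] || '0', 10), nb = parseInt(pb[i] || '0', 10);
    if (na !== nb) return na > nb ? 1 : -1;
  }
  return 0;
}

// Mirrors app.js: true for any stack whose version number is >= 2.1,
// regardless of whether the stack is actually HDP.
function isHadoop21Stack(stackVersion) {
  return compareVersions(stackVersion, '2.1') >= 0;
}

// Mirrors update_controller.js: the ATS filter is emitted whenever YARN
// is installed and the version check passes; the actual presence of
// APP_TIMELINE_SERVER in the stack is never consulted.
function atsHandlerParam(serviceNames, stackVersion) {
  var isATSInstalled = serviceNames.indexOf('YARN') !== -1 &&
      isHadoop21Stack(stackVersion);
  return isATSInstalled ?
      'ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|' : '';
}

// PHD 2.1.0 with HDFS+YARN+ZK: the filter is emitted even though the
// stack defines no APP_TIMELINE_SERVER component, yielding the 500 above.
console.log(atsHandlerParam(['HDFS', 'YARN', 'ZOOKEEPER'], '2.1.0'));
```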
> Since the stack version number is 2.1 and YARN is installed, the UI assumes
> the stack is Hadoop 2.4 compatible (as is the case with HDP), which is not
> the case with non-HDP stacks.
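One possible direction for a fix (a sketch only, not the committed change): gate the request on the component actually being reported for the cluster, rather than inferring its presence from the stack version. The installedComponents input below is a hypothetical stand-in for component data the client already holds:

```javascript
// Hypothetical sketch: emit the ATS filter only when APP_TIMELINE_SERVER
// is actually among the cluster's installed components, so a non-HDP 2.1
// stack without ATS never triggers the failing query.
function atsHandlerParamFixed(installedComponents) {
  var hasATS = installedComponents.indexOf('APP_TIMELINE_SERVER') !== -1;
  return hasATS ?
      'ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|' : '';
}

// For a PHD-2.1.0 cluster with only HDFS/YARN/ZK masters, this returns ''.
console.log(atsHandlerParamFixed(['NAMENODE', 'RESOURCEMANAGER', 'JOURNALNODE']));
```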
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)