[ https://issues.apache.org/jira/browse/AMBARI-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14111692#comment-14111692 ]

Jaimin D Jetly commented on AMBARI-7023:
----------------------------------------

[~adenisso]
Can you please apply the attached patch and try out the PHD 2.1.0 stack with 
the HDFS+YARN+ZK services?
This will help in verifying that the patch addresses this issue correctly.


> Incorrect ATS metric request for non-HDP stack with version 2.1
> ---------------------------------------------------------------
>
>                 Key: AMBARI-7023
>                 URL: https://issues.apache.org/jira/browse/AMBARI-7023
>             Project: Ambari
>          Issue Type: Bug
>    Affects Versions: 1.6.1
>            Reporter: Alexander Denissov
>            Assignee: Jaimin D Jetly
>            Priority: Critical
>             Fix For: 1.7.0
>
>
> Define a non-HDP stack based on Hadoop 2.2, such as PHD 2.1.0 with the 
> HDFS+YARN+ZK services.
> After cluster deployment, when the user presses "Complete" and the UI tries 
> to navigate to the dashboard, a "Server Error" popup appears and the UI is 
> stuck on the loading bar at 
> http://c6401.ambari.apache.org:8080/#/main/dashboard/metrics.
> The popup shows the following error message:
> 500 status code received on GET method for API: 
> 500 status code received on GET method for API: 
> /api/v1/clusters/test/components/?ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|ServiceComponentInfo/component_name=JOURNALNODE|ServiceComponentInfo/category=MASTER&fields=ServiceComponentInfo/Version,ServiceComponentInfo/StartTime,ServiceComponentInfo/HeapMemoryUsed,ServiceComponentInfo/HeapMemoryMax,ServiceComponentInfo/service_name,host_components/HostRoles/host_name,host_components/HostRoles/state,host_components/HostRoles/maintenance_state,host_components/HostRoles/stale_configs,host_components/metrics/jvm/memHeapUsedM,host_components/metrics/jvm/HeapMemoryMax,host_components/metrics/jvm/HeapMemoryUsed,host_components/metrics/jvm/memHeapCommittedM,host_components/metrics/mapred/jobtracker/trackers_decommissioned,host_components/metrics/cpu/cpu_wio,host_components/metrics/rpc/RpcQueueTime_avg_time,host_components/metrics/dfs/FSNamesystem/*,host_components/metrics/dfs/namenode/Version,host_components/metrics/dfs/namenode/DecomNodes,host_components/metrics/dfs/namenode/TotalFiles,host_components/metrics/dfs/namenode/UpgradeFinalized,host_components/metrics/dfs/namenode/Safemode,host_components/metrics/runtime/StartTime,host_components/metrics/yarn/Queue,ServiceComponentInfo/rm_metrics/cluster/activeNMcount,ServiceComponentInfo/rm_metrics/cluster/unhealthyNMcount,ServiceComponentInfo/rm_metrics/cluster/rebootedNMcount,ServiceComponentInfo/rm_metrics/cluster/decommissionedNMcount&minimal_response=true
>  
> Error message: org.apache.ambari.server.controller.spi.SystemException: An 
> internal system exception occurred: Could not find service for component, 
> componentName=APP_TIMELINE_SERVER, clusterName=test, stackInfo=PHD-2.1.0 
> The problem, I believe, is in 
> ambari-web/app/controllers/global/update_controller.js:
> isATSInstalled = App.cache['services'].mapProperty('ServiceInfo.service_name').contains('YARN') && App.get('isHadoop21Stack'),
>       flumeHandlerParam = isFlumeInstalled ? 'ServiceComponentInfo/component_name=FLUME_HANDLER|' : '',
>       atsHandlerParam = isATSInstalled ? 'ServiceComponentInfo/component_name=APP_TIMELINE_SERVER|' : '',
> and in ambari-web/app/app.js:
> isHadoop21Stack: function () {
>     return (stringUtils.compareVersions(this.get('currentStackVersionNumber'), "2.1") === 1 ||
>       stringUtils.compareVersions(this.get('currentStackVersionNumber'), "2.1") === 0)
>   }.property('currentStackVersionNumber'),
> Since the stack version number is 2.1 and YARN is installed, the UI assumes 
> the stack is Hadoop 2.4 compatible (as it is for HDP 2.1), which does not 
> hold for non-HDP stacks such as PHD 2.1.0.
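The version-only check described above can be illustrated with a standalone sketch. This is plain JavaScript with a minimal stand-in for Ambari's stringUtils.compareVersions and a hypothetical stack-name guard; it is an illustration of the failure mode, not the actual attached patch, and the currentStackName-style check is an assumption:

```javascript
// Minimal stand-in for stringUtils.compareVersions:
// returns 1, 0, or -1 as a is greater than, equal to, or less than b.
function compareVersions(a, b) {
  var pa = a.split('.').map(Number);
  var pb = b.split('.').map(Number);
  for (var i = 0; i < Math.max(pa.length, pb.length); i++) {
    var x = pa[i] || 0, y = pb[i] || 0;
    if (x !== y) return x > y ? 1 : -1;
  }
  return 0;
}

// Current logic: any stack whose version is >= 2.1 is treated as an
// ATS-capable "Hadoop 2.1" stack, so the APP_TIMELINE_SERVER metric
// request is issued even for non-HDP stacks like PHD 2.1.0.
function isHadoop21StackBuggy(stackVersion) {
  return compareVersions(stackVersion, '2.1') >= 0;
}

// Hypothetical fix sketch (assumption, not the attached patch): also
// require the HDP stack name, since only HDP 2.1 is known to ship
// APP_TIMELINE_SERVER.
function isHadoop21StackFixed(stackName, stackVersion) {
  return stackName === 'HDP' && compareVersions(stackVersion, '2.1') >= 0;
}

console.log(isHadoop21StackBuggy('2.1.0'));        // PHD 2.1.0 wrongly passes
console.log(isHadoop21StackFixed('PHD', '2.1.0')); // excluded by stack name
console.log(isHadoop21StackFixed('HDP', '2.1'));   // HDP 2.1 still passes
```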



--
This message was sent by Atlassian JIRA
(v6.2#6252)
