[ https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317647#comment-16317647 ]

Jian He commented on YARN-7605:
-------------------------------

bq. I added a check to return EXIT_NOT_FOUND in actionDestroy, if the file is 
not found in HDFS, it will return the proper application not found information.
Still, the logic that "result != 0" means the application was not found is fragile. 
The next time someone adds a new exit code, this error message breaks. It is also 
currently inconsistent between the CLI and the REST API: the CLI ignores the 
error code whereas the REST API throws an exception.
It's better to make them consistent. And the ApplicationNotFound exception is 
confusing: ApplicationNotFound usually means the app was not found in the RM, but 
here it means the app folder does not exist in HDFS. I think we can explicitly 
return the fact that the app doesn't exist in HDFS, like the CLI does, instead of 
throwing an exception.
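To make the comparison concrete, here is a minimal sketch (not the actual ApiServer code) of what "consistent with the CLI" could look like on the REST side: key the not-found response on the specific EXIT_NOT_FOUND code from actionDestroy, state plainly that the app folder does not exist in HDFS, and treat any other non-zero code as an ordinary failure. The class name, the exit-code values, and the message wording are illustrative assumptions:
{code:java}
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

// Sketch only: maps the destroy exit code to an explicit REST response instead
// of throwing ApplicationNotFoundException for a missing HDFS folder.
public class DestroyResponseSketch {

  // Placeholder values; the real ones would come from the service exit codes.
  static final int EXIT_SUCCESS = 0;
  static final int EXIT_NOT_FOUND = 69;

  public Response destroyService(String serviceName, int exitCode) {
    if (exitCode == EXIT_SUCCESS) {
      return Response.status(Status.OK)
          .entity("Successfully destroyed service " + serviceName).build();
    }
    if (exitCode == EXIT_NOT_FOUND) {
      // Same fact the CLI reports: the app folder does not exist in HDFS.
      return Response.status(Status.NOT_FOUND)
          .entity("Service " + serviceName + " does not exist in HDFS").build();
    }
    // Any other (future) exit code is a plain failure, not "not found".
    return Response.status(Status.INTERNAL_SERVER_ERROR)
        .entity("Destroy of service " + serviceName + " failed with exit code "
            + exitCode).build();
  }
}
{code}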
bq. I refactor the code to load only if RM doesn't find the app.
IIUC, the code loads from HDFS if the app is not running. That covers apps that 
are pending or finished, so it may still hit HDFS a lot. For a getStatus kind of 
API, the consumer is usually calling it in a loop. This will get very bad if HDFS 
is down or failing over: the API calls will keep spinning and quickly overload 
the RM. IMO, we may need a better solution for serving persisted app status; the 
current approach may accidentally create a bottleneck.
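To illustrate the concern (not a concrete proposal for this patch): one common way to keep a polling getStatus from turning every call into an HDFS read is a short-lived in-memory cache in front of the persisted status. The names below (loadStatusFromHdfs, the TTL value) are placeholders.
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of a time-bounded cache so that clients polling getStatus in a loop
// do not hit HDFS on every call for pending/finished apps.
public class PersistedStatusCache {

  private static final long TTL_MS = 30_000; // how long a cached status is trusted

  private static final class Entry {
    final String statusJson;
    final long loadedAt;
    Entry(String statusJson, long loadedAt) {
      this.statusJson = statusJson;
      this.loadedAt = loadedAt;
    }
  }

  private final ConcurrentMap<String, Entry> cache = new ConcurrentHashMap<>();

  public String getStatus(String serviceName) throws Exception {
    long now = System.currentTimeMillis();
    Entry e = cache.get(serviceName);
    if (e != null && now - e.loadedAt < TTL_MS) {
      return e.statusJson;                          // served from memory, no HDFS hit
    }
    String fresh = loadStatusFromHdfs(serviceName); // the expensive call
    cache.put(serviceName, new Entry(fresh, now));
    return fresh;
  }

  // Placeholder for the real HDFS read of the persisted service definition.
  private String loadStatusFromHdfs(String serviceName) throws Exception {
    return "{\"name\":\"" + serviceName + "\",\"state\":\"STOPPED\"}";
  }
}
{code}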
bq. Redundant fetch of information. If it is running, remaining time is in the 
copy retrieved from AM.
How is it retrieving that from the AM? Only the RM knows the app's remaining time.
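For reference, the remaining time is RM-side state: assuming a lifetime was set, it is exposed through the ApplicationReport timeouts rather than anything the AM holds. A minimal read, assuming a configured YarnClient and a known ApplicationId:
{code:java}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.ApplicationTimeout;
import org.apache.hadoop.yarn.api.records.ApplicationTimeoutType;
import org.apache.hadoop.yarn.client.api.YarnClient;

// Sketch: reads the app's remaining LIFETIME from the RM via the application report.
public class RemainingTimeSketch {
  public static long remainingLifetimeSeconds(Configuration conf, ApplicationId appId)
      throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    try {
      client.init(conf);
      client.start();
      ApplicationReport report = client.getApplicationReport(appId);
      Map<ApplicationTimeoutType, ApplicationTimeout> timeouts =
          report.getApplicationTimeouts();
      ApplicationTimeout lifetime = timeouts.get(ApplicationTimeoutType.LIFETIME);
      return lifetime == null ? -1 : lifetime.getRemainingTime(); // -1: no lifetime set
    } finally {
      client.stop();
    }
  }
}
{code}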
bq. Patch failed due to revert of YARN-7540. Please commit YARN-7540 so this 
patch can be applied. Thank you.
Once this patch's review is done, we can combine both patches, run Jenkins, and 
commit them together.

> Implement doAs for Api Service REST API
> ---------------------------------------
>
>                 Key: YARN-7605
>                 URL: https://issues.apache.org/jira/browse/YARN-7605
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>             Fix For: yarn-native-services
>
>         Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and ResourceManager RPC 
> calls.  This change helped centralize YARN metadata under the yarn user instead 
> of crawling through every user's home directory to find metadata.  The next 
> step is to make sure "doAs" calls work properly for the API service.  The 
> metadata is stored by the YARN user, but the actual workload still needs to run 
> as the end user, hence the API service must authenticate the end user's 
> Kerberos credential and perform a doAs call when requesting containers via 
> ServiceClient.
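
A minimal sketch of the doAs pattern described above: the API service keeps its own (yarn user) login credentials for metadata access, and wraps the ServiceClient submission in a proxy-user doAs so containers are requested as the end user. The ServiceClient calls shown are illustrative; the real entry points are whatever the patch wires up, and impersonation must be allowed via the hadoop.proxyuser.* settings.
{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.client.ServiceClient;

// Sketch: submit a service as the authenticated end user via a proxy UGI.
public class DoAsSubmitSketch {
  public static ApplicationId submitAs(String endUser, Service service,
      Configuration conf) throws Exception {
    // Real (Kerberos) credentials of the API service daemon, e.g. the yarn user.
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    // Impersonated end user; needs hadoop.proxyuser.* to permit the impersonation.
    UserGroupInformation proxyUser =
        UserGroupInformation.createProxyUser(endUser, realUser);

    return proxyUser.doAs((PrivilegedExceptionAction<ApplicationId>) () -> {
      ServiceClient client = new ServiceClient();
      try {
        client.init(conf);
        client.start();
        // Containers are requested under the end user's identity.
        return client.actionCreate(service);
      } finally {
        client.stop();
      }
    });
  }
}
{code}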



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
