[ https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16319443#comment-16319443 ]
Jian He commented on YARN-7605:
-------------------------------
bq. The remaining lifetime calculated by the RM doesn't appear to be real time.
When 3600 is set during application launch, the value seems to stay constant.
When retrieving it via ApplicationTimeout after the service has already stopped,
the value does not change. Hence, retrieving the value from the RM and merging
it with the service object doesn't compute the time any more accurately. The AM
copy of the service object contains the same initial lifetime value, because the
service object from the AM carries the initial lifetime set during service
creation. While I am not an expert in the ApplicationTimeout code, I haven't
found the time ever changing from its original value. I will put the code back
in the next patch in case this was the result of a bug in the ApplicationTimeout
code.
It should not stay constant; the remaining time indicates how much time the app
has left to run, and it is constantly updated. The ServiceClient gets the
remaining time from the RM via the application report. Last time I checked, it
was working properly. Did you verify that it stays constant?
bq. Back to the comment about the getStatus structure: do you still want the
returned value for a stopped service to be partial information, or similar to
that of a running application?
If an unstable HDFS can make the RM unstable, that sort of cluster downtime
issue sounds more critical to me than this partial information. Because this
endpoint can be called very frequently while the app is in the ACCEPTED state
(the client tends to poll every second or so while waiting for the app to run),
that essentially means the RM will hit HDFS for every getStatus call before the
app is running. Unless a concrete use case asks for complete information while
the app is accepted or completed, I prefer adding this later, once a proper
caching implementation is built. Just my opinion, [~gsaha], [[email protected]]?
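To make the caching idea concrete, here is a rough sketch of the kind of cache
that could absorb per-second polling (the ServiceStatusCache class and the
loadStatusFromHdfs helper are hypothetical, and Guava's LoadingCache is just one
possible mechanism):

{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import org.apache.hadoop.yarn.service.api.records.Service;

public class ServiceStatusCache {
  // Keep status objects for a few seconds so per-second polling from
  // clients does not translate into per-second HDFS reads.
  private final LoadingCache<String, Service> cache =
      CacheBuilder.newBuilder()
          .maximumSize(1000)
          .expireAfterWrite(5, TimeUnit.SECONDS)
          .build(new CacheLoader<String, Service>() {
            @Override
            public Service load(String serviceName) throws Exception {
              return loadStatusFromHdfs(serviceName);
            }
          });

  public Service getStatus(String serviceName) throws ExecutionException {
    return cache.get(serviceName);
  }

  // Hypothetical placeholder for the real read of the persisted service
  // definition from HDFS.
  private Service loadStatusFromHdfs(String serviceName) {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}

With a short expireAfterWrite window, clients can keep polling every second
while HDFS is touched at most once per window per service.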
> Implement doAs for Api Service REST API
> ---------------------------------------
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Yang
> Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch,
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch,
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch,
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch,
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to
> use the REST API instead of making direct file system and ResourceManager RPC
> calls. This change helped centralize YARN metadata under the yarn user,
> instead of crawling through every user's home directory to find it.
> The next step is to make sure "doAs" calls work properly for the API service.
> The metadata is stored by the yarn user, but the actual workload still needs
> to be performed as the end user; hence, the API service must authenticate the
> end user's Kerberos credentials and perform a doAs call when requesting
> containers via ServiceClient.
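> As a rough illustration of that flow (not the actual patch: the launchAsUser
> helper is made up, the class locations are as on the yarn-native-services
> branch, and it assumes the yarn daemon principal is whitelisted as a proxy
> user via the hadoop.proxyuser settings):
>
> {code:java}
> import java.io.IOException;
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.security.UserGroupInformation;
> import org.apache.hadoop.yarn.api.records.ApplicationId;
> import org.apache.hadoop.yarn.conf.YarnConfiguration;
> import org.apache.hadoop.yarn.service.api.records.Service;
> import org.apache.hadoop.yarn.service.client.ServiceClient;
>
> public class ApiServiceDoAs {
>   // Submit the service as the authenticated end user rather than as the
>   // yarn daemon user that owns the metadata.
>   public static ApplicationId launchAsUser(String remoteUser, Service service)
>       throws IOException, InterruptedException {
>     UserGroupInformation proxyUser = UserGroupInformation.createProxyUser(
>         remoteUser, UserGroupInformation.getLoginUser());
>     return proxyUser.doAs((PrivilegedExceptionAction<ApplicationId>) () -> {
>       ServiceClient client = new ServiceClient();
>       client.init(new YarnConfiguration());
>       client.start();
>       try {
>         return client.actionCreate(service);
>       } finally {
>         client.stop();
>       }
>     });
>   }
> }
> {code}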