Github user ajbozarth commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11326#discussion_r57625677
  
    --- Diff: core/src/main/scala/org/apache/spark/status/api/v1/ApplicationListResource.scala ---
    @@ -69,6 +69,8 @@ private[spark] object ApplicationsListResource {
           attempts = app.attempts.map { internalAttemptInfo =>
             new ApplicationAttemptInfo(
               attemptId = internalAttemptInfo.attemptId,
    +          startTimeEpoch = internalAttemptInfo.startTime,
    --- End diff ---
    
    That's ok, and I originally thought the same thing about the redundancy and 
getter methods. I even spent a couple of hours this morning combing through the 
related api code to double-check my understanding.
    
    From my understanding of the api (at least ```ApplicationInfo``` and 
```ApplicationAttemptInfo```), the api classes are created when the api is 
called, using the functions I edited here. The returned objects are then 
automatically converted to json, so there are no getters on the classes and no 
place to add them. The only data that ends up in the json is the data passed in 
as constructor params.
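    To make that concrete, here is a minimal, self-contained sketch of what I 
mean. ```AttemptInfoSketch```, ```SerializationSketch``` and the sample values 
are made up for illustration (the real class has more fields and a 
```private[spark]``` constructor); only ```attemptId```, ```startTime``` and the 
new ```startTimeEpoch``` param come from the actual api. As far as I can tell 
the api layer hands these objects to a Jackson mapper with the Scala module 
registered, so each public val becomes a json field and there is nowhere else 
to hang an extra value:

```scala
import java.util.Date

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Simplified stand-in for the v1 ApplicationAttemptInfo: plain class, public
// vals, no other methods.
class AttemptInfoSketch(
    val attemptId: Option[String],
    val startTime: Date,
    val startTimeEpoch: Long) // the extra param is what surfaces the ms value

object SerializationSketch {
  def main(args: Array[String]): Unit = {
    // Jackson + the Scala module writes one json field per public val.
    val mapper = new ObjectMapper().registerModule(DefaultScalaModule)
    val attempt = new AttemptInfoSketch(
      attemptId = Some("1"),
      startTime = new Date(1458000000000L),
      startTimeEpoch = 1458000000000L)
    println(mapper.writeValueAsString(attempt))
  }
}
```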
    
    I don't fully understand the original choice for the api classes to work 
this way (no methods), but unless I am missing something, adding the extra 
```Epoch``` values as params is the only way to pass the ms info through to the 
api json output. Also, as I mentioned before, the values are only stored in the 
class for the time it takes to process the api call.
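    For reference, the pass-through itself is trivial once the param exists. 
This toy version of the conversion (```InternalAttempt```, ```ApiAttempt``` and 
```ConversionSketch``` are illustrative names, not the real classes) mirrors 
the shape of the edit above: the same epoch ms value feeds both the ```Date``` 
and the new epoch field, and it only lives as long as the api call being served:

```scala
import java.util.Date

// Toy versions of the internal record and the v1 api class, just to show the
// flow of the ms value from one to the other.
case class InternalAttempt(attemptId: Option[String], startTime: Long)

class ApiAttempt(
    val attemptId: Option[String],
    val startTime: Date,       // human-readable timestamp already in the api
    val startTimeEpoch: Long)  // raw ms value, only reachable via a param

object ConversionSketch {
  // The same epoch ms value is handed to the api object twice: once wrapped
  // in a Date and once as-is.
  def convert(internal: InternalAttempt): ApiAttempt =
    new ApiAttempt(
      attemptId = internal.attemptId,
      startTime = new Date(internal.startTime),
      startTimeEpoch = internal.startTime)

  def main(args: Array[String]): Unit = {
    val api = convert(InternalAttempt(Some("1"), 1458000000000L))
    println(s"${api.startTime} -> ${api.startTimeEpoch}")
  }
}
```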

