Radeity commented on issue #11262:
URL: 
https://github.com/apache/dolphinscheduler/issues/11262#issuecomment-1205251304

   > already
   
   
   
   > > @ruanwenjun Yeh, maybe a practicable solution, we can simply talk about 
it.
   > > Before submitting a yarn job, the client apply the application context 
from RM first, and get appId which will be then written into NM's environment 
variable. We can use java agent to read it before executing yarn job's JAR 
file, also, can take taskInstanceId as input of agent program. However, where 
to store this mapping relationship need to be further considered.
   > > Please let me know if you have any good suggestions!
   > 
   > In fact, there is already an issue (#4025) that talks about using an agent to collect the appId, but I don't think it's a good way 😢 — we would need to maintain an agent, and possibly different versions of that agent.
   
   I don't think we need to maintain different versions of an agent. For example, we can parse the appId from an environment variable such as `APPLICATION_WEB_PROXY_BASE`. Every YARN job's `AM` maintains this environment variable; I've already verified it for Flink, Spark, Hive, MR, and Spark-SQL. The only difference is how to set the Java options, which can be defined per task type.
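   As a minimal sketch of the idea above — assuming the proxy base has the usual form `/proxy/application_<clusterTimestamp>_<id>` (the class name and regex here are illustrative, not from DolphinScheduler):
   
   ```java
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;
   
   public class AppIdParser {
       // YARN application IDs look like "application_1656924794278_0001".
       private static final Pattern APP_ID_PATTERN =
               Pattern.compile("(application_\\d+_\\d+)");
   
       /** Extracts the appId from a proxy base string, or returns null if absent. */
       public static String parseAppId(String proxyBase) {
           if (proxyBase == null) {
               return null;
           }
           Matcher m = APP_ID_PATTERN.matcher(proxyBase);
           return m.find() ? m.group(1) : null;
       }
   
       public static void main(String[] args) {
           // Inside an AM container this would come from
           // System.getenv("APPLICATION_WEB_PROXY_BASE").
           String proxyBase = "/proxy/application_1656924794278_0001";
           System.out.println(parseAppId(proxyBase));
       }
   }
   ```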
   
   So it seems that YARN jobs submitted via shell commands can all obtain the appId this way. That said, there are still some design problems, such as where to store the mapping relationship, as mentioned in issue [#4025](https://github.com/apache/dolphinscheduler/issues/4025). I'll think about that carefully.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
