turboFei commented on PR #22911:
URL: https://github.com/apache/spark/pull/22911#issuecomment-1547294539

   > The main two things that don't need to happen in executors anymore are:
   > adding the Hadoop config to the executor pods: this is not needed
   > since the Spark driver will serialize the Hadoop config and send
   > it to executors when running tasks.
   
   Gentle ping @vanzin 
   It seems the executors still need the Hadoop config:
   ![executor still requires Hadoop config (screenshot)](https://github.com/apache/spark/assets/6757692/6632c855-1ab6-47ce-a350-0a344bc02e2e)
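   
   For context, a minimal sketch of the mechanism described in the quoted text: the driver-side Hadoop `Configuration` is wrapped in a serializable holder and broadcast, so tasks read it on executors without any config files mounted into the executor pods. The `SerializableHadoopConf` wrapper below is hypothetical and only stands in for Spark's internal equivalent; this is an illustration, not the PR's actual code path.
   
   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.spark.sql.SparkSession
   
   // Hypothetical wrapper for illustration: Hadoop's Configuration is not
   // java.io.Serializable, so it has to be wrapped before it can be shipped
   // to executors along with tasks.
   class SerializableHadoopConf(@transient var value: Configuration) extends Serializable {
     private def writeObject(out: java.io.ObjectOutputStream): Unit = {
       out.defaultWriteObject()
       value.write(out)          // Configuration implements Writable
     }
     private def readObject(in: java.io.ObjectInputStream): Unit = {
       in.defaultReadObject()
       value = new Configuration(false)
       value.readFields(in)
     }
   }
   
   object HadoopConfShippingSketch {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder().appName("hadoop-conf-ship").getOrCreate()
       val sc = spark.sparkContext
   
       // Driver-side Hadoop config, broadcast so every task can read it on executors.
       val confBroadcast = sc.broadcast(new SerializableHadoopConf(sc.hadoopConfiguration))
   
       sc.parallelize(1 to 4, 4).foreach { _ =>
         // Runs on an executor: the config arrives with the task.
         val conf = confBroadcast.value.value
         println(conf.get("fs.defaultFS"))
       }
       spark.stop()
     }
   }
   ```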
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

