zhouyifan279 commented on PR #41105:
URL: https://github.com/apache/spark/pull/41105#issuecomment-1543275831

   > don't remember the cache stuff, sorry. I know we have a problem in hadoop 
where cache lookup can trigger >1 fs creation and the same time, and if that is 
slow then at best: needless work, at worst: conflict and sometimes failures. so 
we use a semaphore to limit the #of threads which can create a new FS at the 
same time (HADOOP-17313). spark/tez workers and cloud stores doing network IO 
in initialize() are the troublespot here, FWIW.
   
   @steveloughran Thanks for your input.
   
   In the Spark History Server, SparkUIs are tracked by <appId, attemptId> in several 
places:
   1. `ApplicationCache#appCache`
   2. `FsHistoryProvider#activeUIs`
   3. `HistoryServerDiskManager#active`
   4. The disk-based KVStore backend's local path: `appStoreDir/<appId>_<attemptId>/`
   
   All of the above assume that only one SparkUI is loaded per <appId, attemptId> pair.
   Guava's LoadingCache can guarantee this while SparkUIs are guarded by its locks, 
but a SparkUI's detaching is not guarded by the LoadingCache's lock.  
   
   This PR aims to enforce the single-instance guarantee across the SparkUI's whole 
lifecycle, from loading to detaching.
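   To illustrate the invariant in question, here is a minimal, hypothetical sketch (not the PR's actual code, and the class and method names are illustrative only): if both loading and detaching of a per-<appId, attemptId> resource run under the same per-entry lock, a detach cannot interleave with a concurrent load for the same key, so at most one instance exists per key at any time. The sketch uses `ConcurrentHashMap`'s per-entry locking rather than Guava's LoadingCache to keep it self-contained:

   ```java
   import java.util.Optional;
   import java.util.concurrent.ConcurrentHashMap;

   // Hypothetical sketch: one live "UI" per <appId, attemptId> key,
   // with load and detach serialized by the same per-entry lock.
   class UiCacheSketch {
       // Stand-in for SparkUI; the name and fields are illustrative.
       static final class Ui {
           final String key;
           volatile boolean detached = false;
           Ui(String key) { this.key = key; }
       }

       private final ConcurrentHashMap<String, Ui> active = new ConcurrentHashMap<>();

       // computeIfAbsent runs the loader under the map's per-entry lock,
       // so concurrent loads for the same key yield a single Ui instance.
       Ui load(String appId, String attemptId) {
           return active.computeIfAbsent(appId + "_" + attemptId, Ui::new);
       }

       // computeIfPresent runs under the same per-entry lock, so a detach
       // cannot race with a concurrent load of the same key.
       Optional<Ui> detach(String appId, String attemptId) {
           Ui[] removed = new Ui[1];
           active.computeIfPresent(appId + "_" + attemptId, (k, ui) -> {
               ui.detached = true;
               removed[0] = ui;
               return null; // returning null removes the entry
           });
           return Optional.ofNullable(removed[0]);
       }
   }
   ```

   The key design point is that detach goes through the same lock as load; detaching outside that lock (as with the LoadingCache case above) is what opens the window for two UIs to exist for one key.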


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

