Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/2935#issuecomment-60460447
  
    Also, adding our own synchronizing wrapper will let us roll back some of 
the complexity introduced by #2684 for ensuring thread-safety, since each task 
will get its own deserialized copy of the configuration.
    
    I suppose that this could have a small performance penalty because we'll 
always construct a new `Configuration` (which might be expensive), but I think it 
should be pretty minimal (we can try measuring it) and is probably offset by 
other performance improvements in 1.2.
    
    By the way, `CONFIGURATION_INSTANTIATION_LOCK` should probably be moved to 
`SparkHadoopUtil` so that it's accessible from more places that might create 
`Configuration`s.
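
    The locking idea being discussed can be sketched as follows. This is an 
illustrative Java sketch, not Spark's actual code: `HadoopConf` is a stand-in for 
`org.apache.hadoop.conf.Configuration` (whose constructor touches shared mutable 
static state and has historically not been thread-safe), and the helper class and 
method names are assumptions, with only `CONFIGURATION_INSTANTIATION_LOCK` taken 
from the comment above.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ConfLockSketch {
        // Stand-in for Hadoop's Configuration. The real class loads default
        // resources through shared static state during construction, which is
        // why construction itself needs to be serialized.
        static class HadoopConf {
            final long createdAtNanos = System.nanoTime();
        }

        // Single global lock, analogous to the proposed
        // CONFIGURATION_INSTANTIATION_LOCK living in SparkHadoopUtil so every
        // code path that constructs a Configuration can share it.
        private static final Object CONFIGURATION_INSTANTIATION_LOCK = new Object();

        // All Configuration construction funnels through this method; the lock
        // guards only construction, so each caller still gets its own instance.
        static HadoopConf newConfiguration() {
            synchronized (CONFIGURATION_INSTANTIATION_LOCK) {
                return new HadoopConf();
            }
        }

        public static void main(String[] args) throws Exception {
            // Simulate concurrent tasks, each creating its own deserialized copy.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Future<HadoopConf>> futures = new ArrayList<>();
            for (int i = 0; i < 8; i++) {
                Callable<HadoopConf> task = ConfLockSketch::newConfiguration;
                futures.add(pool.submit(task));
            }
            for (Future<HadoopConf> f : futures) {
                f.get(); // every task completed with its own instance
            }
            pool.shutdown();
            System.out.println("created " + futures.size() + " configurations");
        }
    }
    ```

    The per-task cost is one extra `Configuration` construction under a global 
lock, which is the small penalty weighed above against the removal of the 
shared-instance synchronization complexity.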

