GitHub user stephenh commented on the pull request:

    https://github.com/apache/spark/pull/3725#issuecomment-68071870
  
    Cool, sounds good. FWIW, there are a few things to do after this gets in:
    
    a) document that if userClassPathFirst=true, then the user's uberjar should 
not include any Spark or Scala code (or else they'll get ClassCastExceptions, 
because the parent's scala.Function will be a different class than the child's 
scala.Function),
    
    b) either accept Marcelo's PR as-is (which, among other things, applies the 
user-first classloader to driver code) or pull out just the driver part of his 
PR until the rest gets in (I've done this for our local Spark build),
    
    c) as a few others have said, adapt the filtering logic from Jetty/Hadoop 
so that scala.* and org.apache.spark.* (and a few other prefixes) are always 
loaded from the parent classloader, even if the user's uberjar does 
accidentally include them (at that point, the documentation added in a) could 
be removed); see the sketch at the end of this comment.
    
    I've listed these in order from smallest to largest, with the idea that, 
unless someone beats me to it (which would be great :-)), I'll work through 
each one in turn.
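
    For concreteness, here's a minimal sketch of the filtering classloader 
from c). This is illustrative only, not Spark's (or Jetty's/Hadoop's) actual 
implementation; the class name and the prefix list are assumptions made up 
for the example:

```scala
import java.net.{URL, URLClassLoader}

// Illustrative sketch: a user-first URLClassLoader that always delegates a
// fixed set of "framework" package prefixes to the parent loader. The class
// name and prefix list here are assumptions for this example.
class FilteringUserFirstClassLoader(urls: Array[URL], parent: ClassLoader)
    extends URLClassLoader(urls, parent) {

  // Packages that must always come from the parent, even if the user's
  // uberjar accidentally bundles copies of them.
  private val parentFirstPrefixes =
    Seq("java.", "javax.", "scala.", "org.apache.spark.")

  override def loadClass(name: String, resolve: Boolean): Class[_] =
    getClassLoadingLock(name).synchronized {
      var c = findLoadedClass(name)
      if (c == null) {
        c =
          if (parentFirstPrefixes.exists(p => name.startsWith(p))) {
            // Parent-first for framework classes: this is what prevents the
            // duplicate scala.Function problem from a).
            super.loadClass(name, false)
          } else {
            // User-first for everything else: try the uberjar, then fall
            // back to the parent loader.
            try findClass(name)
            catch { case _: ClassNotFoundException => super.loadClass(name, false) }
          }
      }
      if (resolve) resolveClass(c)
      c
    }
}
```

    The reason a) bites without such a filter: JVM class identity is (loader, 
name), so a child-first loader that finds scala.Function1 in the uberjar 
defines a second, distinct class with the same name, and Spark code compiled 
against the parent's copy throws a ClassCastException when handed an instance 
of the child's copy.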

