Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/4998#issuecomment-78893597
  
    I think the difference is that the Hadoop profiles build Spark for different, but fixed, deployment contexts, whereas this is trying to accommodate two different types of _user_ app. A Spark built this way won't run apps that "normal" builds will, and vice versa (right?). That's why I don't think a build profile solves this problem for people (without creating a bigger one).
    
    I'd really hope there's a flavor of log4j that supports both old and new code and config; that would be far better. The other answer is simply that you can't use log4j 2 with Spark and need to use slf4j instead.
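    
    As a minimal sketch of that second option (the object name `MyApp` is made up for illustration): the user app codes against the slf4j API only, and whatever binding Spark ships on the classpath, log4j 1.2 in a stock build, does the actual logging.
    
    ```scala
    import org.slf4j.{Logger, LoggerFactory}
    
    // Hypothetical user app: it depends only on the slf4j API, so the
    // same jar runs whether the runtime binding is log4j 1.2 (stock
    // Spark) or something else entirely.
    object MyApp {
      private val log: Logger = LoggerFactory.getLogger(getClass)
    
      def main(args: Array[String]): Unit = {
        log.info("logging through slf4j; the backend is whatever Spark provides")
      }
    }
    ```
    
    The trade-off is that configuration stays backend-specific: the app sidesteps the log4j 1 vs. 2 API split, but its output format and log levels are governed by whichever binding wins on the classpath.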

