Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/852#issuecomment-44074164
The only reason to change it is that it doesn't match people's
expectations, or at least my expectations. I always look in stdout first
for a job's progress/logging, and then remember that everything is
written to stderr regardless of its log level. It's pretty unusual to
see DEBUG- and INFO-level logs going into a file named stderr.
If you think changing the logging behavior would jar existing Spark
users then we can keep it as is, but I think it's a worthwhile change
that more closely matches my expectations of how log4j log levels should
map to stdout and stderr.
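[Editor's note: for readers following the thread, here is a minimal sketch of the level-to-stream mapping being discussed, written against log4j 1.x properties syntax; the appender names are illustrative, and because log4j 1.x's PropertyConfigurator cannot attach per-appender filters, WARN-and-above messages here go to stderr but are not excluded from stdout (a full split would need an XML configuration with a LevelRangeFilter):]

```properties
# Illustrative sketch: everything at INFO and above goes to stdout;
# WARN and above additionally go to stderr via the appender Threshold.
log4j.rootLogger=INFO, stdout, stderr

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.Target=System.err
log4j.appender.stderr.Threshold=WARN
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```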
On Fri, May 23, 2014 at 5:08 PM, Matei Zaharia
<[email protected]> wrote:
> This behavior has been in there for a while, so I'm curious, is there a
> strong reason to change it? It would be a change in behavior that some
> users might not expect. Users can always configure their own
> log4j.properties if they don't want this one.
>
> Reply to this email directly or view it on GitHub
> <https://github.com/apache/spark/pull/852#issuecomment-44060793>.