GitHub user mihir6692 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12697#discussion_r61161998
  
    --- Diff: conf/log4j.properties.template ---
    @@ -38,3 +38,14 @@ log4j.logger.parquet=ERROR
      # SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
     log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
     log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
    +
    +# SPARK-14754: Metrics as logs are not coming through slf4j.
    +#log4j.logger.org.apache.spark.metrics=INFO, metricFileAppender
    +#log4j.additivity.org.apache.spark.metrics=true
    +
    +#log4j.appender.metricFileAppender=org.apache.log4j.RollingFileAppender
    +#log4j.appender.metricFileAppender.File=${logFilePath}
    +#log4j.appender.metricFileAppender.MaxFileSize=10MB
    --- End diff --
    
    For Slf4jSink.scala:

    It's not about the class path or the class's place in the package; it is
    just a name. For example, if you keep a name like `Spark.log4j`, it will
    still work (as long as the same `Spark.log4j` is used in log4j.properties).
    So it won't matter even if we move the class to some other folder or
    package; the sketch below illustrates this.
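
    A minimal sketch of that point, assuming the Dropwizard `Slf4jReporter`
    that Slf4jSink.scala is built on; the object name, counter name and the
    `Spark.log4j` logger name here are only illustrative:

    ```scala
    import java.util.concurrent.TimeUnit

    import com.codahale.metrics.{MetricRegistry, Slf4jReporter}
    import org.slf4j.LoggerFactory

    // Illustrative sketch: the logger name handed to SLF4J is arbitrary.
    object MetricsLoggerNameSketch {
      def main(args: Array[String]): Unit = {
        val registry = new MetricRegistry()
        registry.counter("example.counter").inc()

        // "Spark.log4j" is not a package or class path; it only has to match
        // the logger name configured in log4j.properties, e.g.
        //   log4j.logger.Spark.log4j=INFO, metricFileAppender
        val reporter = Slf4jReporter.forRegistry(registry)
          .outputTo(LoggerFactory.getLogger("Spark.log4j"))
          .convertRatesTo(TimeUnit.SECONDS)
          .convertDurationsTo(TimeUnit.MILLISECONDS)
          .build()

        // One-off dump for the sketch; a real sink would call reporter.start(...).
        reporter.report()
      }
    }
    ```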
    
    For log4j.properties:

    The main reason to use a new appender is to get the metrics into a separate
    file, which makes them much easier to parse.

    It is also better to use a new appender because disabling the root logger
    then only disables the application logs, not the metrics logs (for a more
    detailed and thorough explanation see http://stackoverflow.com/a/23323046);
    see the config sketch below.
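
    To make that concrete, here is a hedged sketch rather than the template
    text itself: the file path and backup count are placeholders, and
    additivity is set to false (unlike the diff above) purely to show the
    metrics stream staying isolated from whatever happens to the root logger:

    ```properties
    # Sketch only: silencing the root logger affects application logs alone.
    log4j.rootCategory=OFF

    # Metrics still flow to their own file through a dedicated appender.
    log4j.logger.org.apache.spark.metrics=INFO, metricFileAppender
    log4j.additivity.org.apache.spark.metrics=false

    log4j.appender.metricFileAppender=org.apache.log4j.RollingFileAppender
    log4j.appender.metricFileAppender.File=/tmp/spark-metrics.log
    log4j.appender.metricFileAppender.MaxFileSize=10MB
    log4j.appender.metricFileAppender.MaxBackupIndex=5
    log4j.appender.metricFileAppender.layout=org.apache.log4j.PatternLayout
    log4j.appender.metricFileAppender.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n
    ```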
    
    On Tue, Apr 26, 2016 at 4:27 PM, Sean Owen <[email protected]> wrote:
    
    > In conf/log4j.properties.template
    > <https://github.com/apache/spark/pull/12697#discussion_r61066147>:
    >
    > > @@ -38,3 +38,14 @@ log4j.logger.parquet=ERROR
     > >  # SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
    > >  log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
    > >  log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
    > > +
    > > +# SPARK-14754: Metrics as logs are not coming through slf4j.
    > > +#log4j.logger.org.apache.spark.metrics=INFO, metricFileAppender
    > > +#log4j.additivity.org.apache.spark.metrics=true
    > > +
    > > +#log4j.appender.metricFileAppender=org.apache.log4j.RollingFileAppender
    > > +#log4j.appender.metricFileAppender.File=${logFilePath}
    > > +#log4j.appender.metricFileAppender.MaxFileSize=10MB
    >
    > I still don't think this has been addressed. It shouldn't need a new
    > appender, right?
    >
    
    
    
    -- 
    Mihir Monani
    (+91)-9429473434


