Github user rahulsinghaliitd commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1067#discussion_r13741193
  
    --- Diff: core/src/main/scala/org/apache/spark/metrics/sink/CsvSink.scala ---
    @@ -53,11 +53,14 @@ private[spark] class CsvSink(val property: Properties, val registry: MetricRegis
         case None => CSV_DEFAULT_DIR
       }
     
    +  val file = new File(pollDir + conf.get("spark.app.uniqueName"))
    --- End diff ---
    
    Hi @jerryshao,
    
    1. At the moment the Sinks are created, SparkEnv has not yet been created. I may be able to modify the Properties being passed to this Sink (see the first sketch below), or even get the SparkConf from the SecurityManager, but neither approach seems generic to me. For example, we would need a hadoopConf if we wanted the CSV directory to be on HDFS.
    
    2. Thanks for pointing out the problem with Master and Worker. For now I have added app names to those classes. Please let me know if you think adding null checks in CsvSink (see the second sketch below) would also be useful.
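    
    First sketch: a minimal, hedged illustration of the "modify the Properties" idea, assuming whoever constructs the sink has a SparkConf in hand. The key "spark.app.uniqueName" comes from the diff above; `MetricsPropsHelper` and `withAppName` are hypothetical names, not existing Spark API:
    
    ```scala
    import java.util.Properties
    import org.apache.spark.SparkConf
    
    object MetricsPropsHelper {
      // Hypothetical helper: copy the sink's Properties and inject the app
      // name from the SparkConf, so that CsvSink itself never needs to reach
      // for SparkEnv at construction time.
      def withAppName(sinkProps: Properties, conf: SparkConf): Properties = {
        val enriched = new Properties()
        enriched.putAll(sinkProps)
        conf.getOption("spark.app.uniqueName").foreach { name =>
          enriched.setProperty("spark.app.uniqueName", name)
        }
        enriched
      }
    }
    ```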
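    
    Second sketch: what a defensive null check inside CsvSink might look like, assuming the app name is read from the sink's `property: Properties` shown in the diff header. The "no-app" fallback directory name is illustrative only:
    
    ```scala
    import java.io.File
    
    // Hypothetical null check: Master and Worker carry no application, so
    // the property may be absent; Option(...) turns the possible null into
    // a fallback directory name instead of a NullPointerException.
    val appName = Option(property.getProperty("spark.app.uniqueName"))
      .getOrElse("no-app") // fallback name is illustrative only
    val file = new File(pollDir, appName)
    ```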

