Hi all, While working on some seemingly unrelated code, I ran into an issue where "spark.hadoop.*" configs were not making it to the Configuration objects in some parts of the code. I was relying on that propagation to avoid dirty tricks with the classpath while running tests, but that's a little beside the point.
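For context, the propagation I mean is roughly the loop below (a minimal sketch; "appendSparkHadoopConfigs" is a name I made up here, the real logic currently lives in SparkContext):

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf

// Copy every SparkConf entry prefixed with "spark.hadoop." into the
// Hadoop Configuration, stripping the prefix. So e.g.
// "spark.hadoop.fs.defaultFS" becomes "fs.defaultFS".
def appendSparkHadoopConfigs(conf: SparkConf, hadoopConf: Configuration): Unit = {
  conf.getAll.foreach { case (key, value) =>
    if (key.startsWith("spark.hadoop.")) {
      hadoopConf.set(key.substring("spark.hadoop.".length), value)
    }
  }
}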
Since I don't know the history of that logic in SparkContext, does anybody see any issue with moving it up a layer, so that all code that uses SparkHadoopUtil.newConfiguration() gets the same behavior? This would also cover code (e.g. in the yarn module) that calls "new Configuration()" directly instead of going through the wrapper. -- Marcelo