Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229803309
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +473,42 @@ object SparkHadoopUtil {
hadoopConf.set(key.substring("spark.hadoop.".length), value)
}
}
+
+
+  lazy val builderReflection: Option[(Class[_], Method, Method)] = Try {
+    val cls = Utils.classForName(
+      "org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder")
+    (cls, cls.getMethod("replicate"), cls.getMethod("build"))
+  }.toOption
+
+ // scalastyle:off line.size.limit
+  /**
+   * Create a path that uses replication instead of erasure coding, regardless of the default
+   * configuration in hdfs for the given path. This can be helpful as hdfs ec doesn't support
--- End diff --
"ec" is already explained in the line above. no need to repeat it.
---
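For context (not part of the original review), here is a minimal sketch of how a reflective
lookup like `builderReflection` could be used to force replication. The helper name
`createNonEcFile` is hypothetical, and the sketch assumes Hadoop 2.9+/3.x on the compile
classpath, where `FileSystem.createFile(Path)` returns an `FSDataOutputStreamBuilder`, and
that the `builderReflection` value from the diff above is in scope:

    import java.lang.reflect.Method
    import org.apache.hadoop.fs.{FSDataOutputStream, FileSystem, Path}

    // Hypothetical helper: create a file with replication (no erasure coding)
    // when the HDFS builder API is available, otherwise fall back to create().
    def createNonEcFile(fs: FileSystem, path: Path): FSDataOutputStream = {
      builderReflection match {
        case Some((cls, replicate, build)) =>
          // FileSystem.createFile(Path) returns an FSDataOutputStreamBuilder;
          // on HDFS the concrete subclass exposes replicate() and build().
          val builder = fs.createFile(path)
          if (cls.isInstance(builder)) {
            replicate.invoke(builder)  // replicate() mutates and returns the builder
            build.invoke(builder).asInstanceOf[FSDataOutputStream]
          } else {
            fs.create(path)  // non-HDFS filesystem: plain create()
          }
        case None =>
          fs.create(path)  // builder class not present (older Hadoop)
      }
    }

The fallback branches matter because the builder class only exists on newer HDFS clients,
which is why the lookup is wrapped in Try and returns an Option in the first place.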