Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/22881
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229803309
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +473,42 @@ object SparkHadoopUtil {
hadoopConf.set(key.s
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229802904
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +472,33 @@ object SparkHadoopUtil {
hadoopConf.set(key.s
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229577581
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +472,33 @@ object SparkHadoopUtil {
hadoopConf.se
Github user xiao-chen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229403073
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +473,42 @@ object SparkHadoopUtil {
hadoopConf.set(ke
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229172448
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +473,42 @@ object SparkHadoopUtil {
hadoopConf.set(key.s
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229155491
--- Diff: docs/configuration.md ---
@@ -761,6 +761,17 @@ Apart from these, the following properties are also available, and may be useful
Compression
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229154733
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +473,42 @@ object SparkHadoopUtil {
hadoopConf.set(key.su
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229102664
--- Diff: docs/configuration.md ---
@@ -761,6 +761,17 @@ Apart from these, the following properties are also available, and may be useful
Compressio
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229104197
--- Diff: docs/configuration.md ---
@@ -761,6 +761,17 @@ Apart from these, the following properties are also available, and may be useful
Compressio
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229103471
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -26,11 +27,12 @@ import scala.collection.JavaConverters._
import scal
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22881#discussion_r229102457
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
@@ -471,4 +473,42 @@ object SparkHadoopUtil {
hadoopConf.set(key.s
GitHub user squito opened a pull request:
https://github.com/apache/spark/pull/22881
[SPARK-25855][CORE] Don't use erasure coding for event logs by default
## What changes were proposed in this pull request?
This turns off hdfs erasure coding by default for event logs, regar
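As a sketch of how the resulting behavior might be controlled (the property name `spark.eventLog.allowErasureCoding` is assumed from the SPARK-25855 change and should be verified against docs/configuration.md for your Spark version), a spark-defaults.conf fragment opting back into erasure coding for event logs could look like:

```
# spark-defaults.conf (sketch; property name assumed, not confirmed here)
# With this PR, event logs are written with plain replication by default;
# setting the flag to true would let them inherit the HDFS directory's
# erasure coding policy instead.
spark.eventLog.allowErasureCoding  true
```

On the HDFS side, the erasure coding policy in effect on the event log directory can be inspected with `hdfs ec -getPolicy -path <log-dir>` (Hadoop 3.x CLI).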