Hello all,

As an outsider, I don't fully understand this discussion. This particular configuration option "leaked" into the open-source Spark distribution, and now there is a lot of discussion about how to mitigate the impact on existing workloads. But presumably the people who depend on this configuration flag are already using a downstream (vendor-specific) fork, and any future update will likewise be distributed by that downstream provider.
Which users a) built a workflow using the vendor fork and b) want to resume it on the OSS version of Spark? It seems that anyone affected by this will already be running someone else's fork, so there is no need to carry this patch in the mainline Spark code. For that reason, I believe the code should be dropped from OSS Spark, and vendors who need to mitigate it can push the appropriate changes to their downstream distributions.

Thanks,
Andrew

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org