Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19861#discussion_r154503549
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -732,3 +743,25 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
  private val extraOptions = new scala.collection.mutable.HashMap[String, String]
}
+
+private[sql] object DataFrameReader {
+
+  /**
+   * Helper method to filter session configs whose config key matches at least one of the given
+   * prefixes.
+   *
+   * @param cs the config key-prefixes to filter by.
+   * @param conf the session conf
+   * @return an immutable map containing all the session configs that should be propagated to
+   *         the data source.
+   */
+  def withSessionConfig(
--- End diff ---
These helper functions need to be moved to the `org.apache.spark.sql.execution.datasources.v2` package, since they will be called by the SQL API code path.
Another, more straightforward option is to provide this via `ConfigSupport`.
WDYT? cc @cloud-fan
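
For reference, the prefix-filtering behavior described in the javadoc above could be sketched roughly as follows. This is an illustrative standalone version only: the object name `ConfigFilter`, the full parameter list, and the use of a plain `Map[String, String]` for the session conf are assumptions, since the actual `withSessionConfig` signature is truncated in the diff.

```scala
// Hypothetical sketch of a prefix-based session-config filter.
// The object/parameter names and the Map-backed conf are illustrative
// assumptions, not Spark's actual API.
object ConfigFilter {

  /**
   * Returns an immutable map of all entries in `conf` whose key starts
   * with at least one of the given prefixes.
   */
  def withSessionConfig(
      prefixes: Seq[String],
      conf: Map[String, String]): Map[String, String] = {
    conf.filter { case (key, _) => prefixes.exists(key.startsWith) }
  }
}
```

A call such as `ConfigFilter.withSessionConfig(Seq("spark.datasource."), conf)` would then keep only the `spark.datasource.*` entries for propagation to the data source.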
---