[GitHub] [spark] maropu commented on a change in pull request #28853: [SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config

2020-06-19 Thread GitBox


maropu commented on a change in pull request #28853:
URL: https://github.com/apache/spark/pull/28853#discussion_r443085104



##
File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
##
@@ -528,6 +528,41 @@ class FileSourceStrategySuite extends QueryTest with SharedSparkSession with Pre
     }
   }
 
+  test("SPARK-32019: Add spark.sql.files.minPartitionNum config") {
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "1") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 1)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "10") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 3)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "16") {
+      val partitions = (1 to 100).map(i => s"file$i" -> 128*1024*1024)
+      val table = createTable(files = partitions)
+      // partition is limit by filesMaxPartitionBytes(128MB)
+      assert(table.rdd.partitions.length == 100)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "32") {
+      val partitions = (1 to 800).map(i => s"file$i" -> 4*1024*1024)

Review comment:
   ditto
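For context on the partition counts asserted in the quoted test: a minimal, self-contained sketch of the split-size arithmetic (helper names and structure are mine; the constants are the documented defaults of `spark.sql.files.maxPartitionBytes` and `spark.sql.files.openCostInBytes`). Spark's `FilePartition.maxSplitBytes` clamps `totalBytes / minPartitionNum` between the open cost and the max partition size:

```scala
// Hedged sketch of the split-size formula; names below are mine, not Spark API.
object SplitSizeSketch {
  val maxPartitionBytes: Long = 128L * 1024 * 1024 // spark.sql.files.maxPartitionBytes default
  val openCostInBytes: Long = 4L * 1024 * 1024     // spark.sql.files.openCostInBytes default

  // Each file is padded with the open cost before the total is divided by the
  // (suggested, not guaranteed) minimum partition number.
  def maxSplitBytes(fileSizes: Seq[Long], minPartitionNum: Int): Long = {
    val totalBytes = fileSizes.map(_ + openCostInBytes).sum
    val bytesPerCore = totalBytes / minPartitionNum
    math.min(maxPartitionBytes, math.max(openCostInBytes, bytesPerCore))
  }
}
```

With three 1-byte files and `minPartitionNum = 10`, the split size collapses to `openCostInBytes`, so each file lands in its own partition (the `== 3` assertion); with 100 files of 128MB, the split size is capped at `maxPartitionBytes`, giving one partition per file (the `== 100` assertion).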





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] maropu commented on a change in pull request #28853: [SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config

2020-06-19 Thread GitBox


maropu commented on a change in pull request #28853:
URL: https://github.com/apache/spark/pull/28853#discussion_r443085083



##
File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
##
@@ -528,6 +528,41 @@ class FileSourceStrategySuite extends QueryTest with SharedSparkSession with Pre
     }
   }
 
+  test("SPARK-32019: Add spark.sql.files.minPartitionNum config") {
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "1") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 1)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "10") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 3)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "16") {
+      val partitions = (1 to 100).map(i => s"file$i" -> 128*1024*1024)
+      val table = createTable(files = partitions)
+      // partition is limit by filesMaxPartitionBytes(128MB)

Review comment:
   nit: limit -> limited








[GitHub] [spark] maropu commented on a change in pull request #28853: [SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config

2020-06-19 Thread GitBox


maropu commented on a change in pull request #28853:
URL: https://github.com/apache/spark/pull/28853#discussion_r443085037



##
File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
##
@@ -528,6 +528,41 @@ class FileSourceStrategySuite extends QueryTest with SharedSparkSession with Pre
     }
   }
 
+  test("SPARK-32019: Add spark.sql.files.minPartitionNum config") {
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "1") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 1)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "10") {
+      val table =
+        createTable(files = Seq(
+          "file1" -> 1,
+          "file2" -> 1,
+          "file3" -> 1
+        ))
+      assert(table.rdd.partitions.length == 3)
+    }
+
+    withSQLConf(SQLConf.FILES_MIN_PARTITION_NUM.key -> "16") {
+      val partitions = (1 to 100).map(i => s"file$i" -> 128*1024*1024)

Review comment:
   nit: `128*1024*1024` -> `128 * 1024 * 1024`








[GitHub] [spark] maropu commented on a change in pull request #28853: [SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config

2020-06-19 Thread GitBox


maropu commented on a change in pull request #28853:
URL: https://github.com/apache/spark/pull/28853#discussion_r443084921



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -1176,6 +1176,15 @@ object SQLConf {
     .longConf
     .createWithDefault(4 * 1024 * 1024)
 
+  val FILES_MIN_PARTITION_NUM = buildConf("spark.sql.files.minPartitionNum")
+    .doc("The suggested (not guaranteed) minimum number of splitting file partitions. " +

Review comment:
   splitting? split?








[GitHub] [spark] maropu commented on a change in pull request #28853: [SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config

2020-06-18 Thread GitBox


maropu commented on a change in pull request #28853:
URL: https://github.com/apache/spark/pull/28853#discussion_r442578498



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -1176,6 +1176,15 @@ object SQLConf {
     .longConf
     .createWithDefault(4 * 1024 * 1024)
 
+  val FILES_MIN_PARTITION_NUM = buildConf("spark.sql.files.minPartitionNum")
+    .doc("The suggested (not guaranteed) minimum number of file split partitions. If not set, " +

Review comment:
   `file split` -> `split file`?

##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -1176,6 +1176,15 @@ object SQLConf {
     .longConf
     .createWithDefault(4 * 1024 * 1024)
 
+  val FILES_MIN_PARTITION_NUM = buildConf("spark.sql.files.minPartitionNum")
+    .doc("The suggested (not guaranteed) minimum number of file split partitions. If not set, " +
+      "the default value is the default parallelism of the Spark cluster. This configuration is " +

Review comment:
   `the default parallelism of the Spark cluster` -> 
`spark.default.parallelism`?
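The fallback behavior being discussed in this comment — an explicitly configured value wins, otherwise the cluster's default parallelism — can be sketched as follows (the function name and shape are mine, not Spark API):

```scala
// Hedged sketch of the documented resolution order: a configured
// minPartitionNum takes precedence; otherwise spark.default.parallelism.
def effectiveMinPartitionNum(configured: Option[Int], defaultParallelism: Int): Int =
  configured.getOrElse(defaultParallelism)
```

So on a cluster where `spark.default.parallelism` is 200, leaving the new config unset yields 200, while setting it to 4 yields 4.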








[GitHub] [spark] maropu commented on a change in pull request #28853: [SPARK-32019][SQL] Add spark.sql.files.minPartitionNum config

2020-06-18 Thread GitBox


maropu commented on a change in pull request #28853:
URL: https://github.com/apache/spark/pull/28853#discussion_r442037234



##
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##
@@ -1176,6 +1176,14 @@ object SQLConf {
     .longConf
     .createWithDefault(4 * 1024 * 1024)
 
+  val FILES_MIN_PARTITION_NUM = buildConf("spark.sql.files.minPartitionNum")
+    .doc("The suggested (not guaranteed) minimum number of file split partitions. If not set, " +
+      "the default value is the default parallelism of the Spark cluster. This configuration is " +
+      "effective only when using file-based sources such as Parquet, JSON and ORC.")
+    .version("3.1.0")
+    .intConf

Review comment:
   Could you add `checkValue`?
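For reference, SQLConf's builder expresses the requested guard as `.checkValue(predicate, errorMessage)` chained after `.intConf`. A stand-alone stand-in (the function name is mine) showing the contract such a check would enforce for this config:

```scala
// Hedged sketch of the positive-value check the reviewer asks for; in
// SQLConf's builder this would read `.checkValue(v => v > 0, "...")`.
def validateMinPartitionNum(v: Int): Int = {
  require(v > 0, s"spark.sql.files.minPartitionNum must be a positive integer, got $v")
  v
}
```

A non-positive value then fails fast at configuration time instead of producing a zero or negative divisor when computing the split size.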




