Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13773#discussion_r67782600
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -52,29 +52,44 @@ private[sql] object JDBCRelation {
* @return an array of partitions with where clause for each partition
*/
  def columnPartition(partitioning: JDBCPartitioningInfo): Array[Partition] = {
-    if (partitioning == null) return Array[Partition](JDBCPartition(null, 0))
+    if (partitioning == null || partitioning.numPartitions <= 1 ||
+      partitioning.lowerBound == partitioning.upperBound) {
+      return Array[Partition](JDBCPartition(null, 0))
+    }
-    val numPartitions = partitioning.numPartitions
-    val column = partitioning.column
-    if (numPartitions == 1) return Array[Partition](JDBCPartition(null, 0))
+    val lowerBound = partitioning.lowerBound
+    val upperBound = partitioning.upperBound
+    require (lowerBound <= upperBound,
+      "Operation not allowed: the lower bound of partitioning column is larger than the upper " +
+      s"bound. Lower bound: $lowerBound; Upper bound: $upperBound")
+
+    val numPartitions =
+      if ((upperBound - lowerBound) >= partitioning.numPartitions) {
+        partitioning.numPartitions
+      } else {
+        upperBound - lowerBound
+      }
     // Overflow and silliness can happen if you subtract then divide.
     // Here we get a little roundoff, but that's (hopefully) OK.
-    val stride: Long = (partitioning.upperBound / numPartitions
-      - partitioning.lowerBound / numPartitions)
+    val stride: Long = upperBound / numPartitions - lowerBound / numPartitions
+    // The automatic adjustment of numPartitions can ensure the following checking condition.
+    assert(stride >= 1, "The specified number of partitions should be greater than " +
--- End diff --
This `assert` is redundant; it is just an extra check. The changes made in this PR already guarantee that the value is always greater than 0:
https://github.com/gatorsmile/spark/blob/da3720b7a70949e09c3562e6f3a168a690243b6c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala#L66-L71
A `stride` value of zero would be pretty dangerous: we would be generating useless, empty partitions. That is the second issue we are trying to resolve:
```
Partition 0: id < 1 or id is null
Partition 1: id >= 1 AND id < 1
Partition 2: id >= 1 AND id < 1
Partition 3: id >= 1 AND id < 1
Partition 4: id >= 1 AND id < 1
Partition 5: id >= 1 AND id < 1
Partition 6: id >= 1 AND id < 1
Partition 7: id >= 1 AND id < 1
Partition 8: id >= 1 AND id < 1
Partition 9: id >= 1
```
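To make the danger concrete, here is a minimal standalone sketch, not the actual Spark source: the helper name `whereClauses` and its simplified signature are mine, and it replays the clause-generation loop with the pre-fix stride computation (no clamping of `numPartitions`) to reproduce the degenerate output above:

```scala
// Hypothetical helper (not JDBCRelation itself): replays the WHERE-clause
// generation loop using the old stride formula.
def whereClauses(column: String, lowerBound: Long, upperBound: Long,
    numPartitions: Int): Seq[String] = {
  // Pre-fix stride: integer division can round both terms down to 0
  // whenever upperBound - lowerBound < numPartitions.
  val stride: Long = upperBound / numPartitions - lowerBound / numPartitions
  var currentValue = lowerBound
  (0 until numPartitions).map { i =>
    val lBound = if (i != 0) s"$column >= $currentValue" else null
    currentValue += stride
    val uBound = if (i != numPartitions - 1) s"$column < $currentValue" else null
    if (uBound == null) lBound
    else if (lBound == null) s"$uBound or $column is null"
    else s"$lBound AND $uBound"
  }
}

// lowerBound = 1, upperBound = 2, numPartitions = 10 gives stride = 0,
// so partitions 1..8 all degenerate to "id >= 1 AND id < 1".
whereClauses("id", 1L, 2L, 10).foreach(println)
```

With the clamping added in this diff, `numPartitions` would first be reduced to `upperBound - lowerBound`, so `stride` stays at least 1 and these empty predicates cannot be produced.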
I think your idea is good: we should log a warning message. Let me do it. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]