Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18861#discussion_r131766797
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2662,6 +2662,30 @@ class Dataset[T] private[sql](
}
/**
+ * Returns a new Dataset that a user-defined `PartitionCoalescer` reduces into fewer
+ * partitions. `userDefinedCoalescer` is the same as the coalescer used in the `RDD`
+ * coalesce function.
+ *
+ * If a larger number of partitions is requested, it will stay at the current
+ * number of partitions. Similar to coalesce defined on an `RDD`, this operation results in
+ * a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not
+ * be a shuffle; instead each of the 100 new partitions will claim 10 of the current partitions.
+ *
+ * However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
+ * this may result in your computation taking place on fewer nodes than
+ * you like (e.g. one node in the case of numPartitions = 1). To avoid this,
+ * you can call repartition. This will add a shuffle step, but it means the
+ * current upstream partitions will be executed in parallel (per whatever
+ * the current partitioning is).
+ *
+ * @group typedrel
+ * @since 2.3.0
+ */
+ def coalesce(numPartitions: Int, userDefinedCoalescer: Option[PartitionCoalescer])
+ : Dataset[T] = withTypedPlan {
--- End diff ---
```Scala
def coalesce(
    numPartitions: Int,
    userDefinedCoalescer: Option[PartitionCoalescer]): Dataset[T] = withTypedPlan {
```
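Aside: the doc comment's claim that "each of the 100 new partitions will claim 10 of the current partitions" is what a default-style coalescer does when it evenly groups parent partition indices. A minimal sketch of that grouping logic, with no Spark dependency (`CoalesceSketch` and `groupPartitions` are hypothetical names for illustration, not Spark API):

```scala
// Sketch only: evenly group parent partition indices into at most
// `numPartitions` new partitions -- a narrow dependency, no shuffle.
object CoalesceSketch {
  def groupPartitions(numParents: Int, numPartitions: Int): Seq[Seq[Int]] = {
    // Requesting more partitions than currently exist keeps the current count.
    val target = math.min(numPartitions, numParents)
    (0 until numParents)
      .groupBy(i => i * target / numParents) // contiguous, even groups
      .toSeq.sortBy(_._1).map(_._2)
  }

  def main(args: Array[String]): Unit = {
    val groups = groupPartitions(1000, 100)
    println(groups.length)      // 100 new partitions
    println(groups.head.length) // each claims 10 parent partitions
  }
}
```

A real `PartitionCoalescer` passed as `userDefinedCoalescer` could implement any grouping policy (e.g. locality-aware), which is the point of exposing the parameter.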