Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12887#discussion_r62148622
--- Diff: R/pkg/R/DataFrame.R ---
@@ -570,10 +570,17 @@ setMethod("unpersist",
#' Repartition
#'
-#' Return a new SparkDataFrame that has exactly numPartitions partitions.
-#'
+#' The following options for repartition are possible:
+#' \itemize{
+#' \item{"Option 1"} {Return a new SparkDataFrame partitioned by
+#' the given columns into `numPartitions`.}
+#' \item{"Option 2"} {Return a new SparkDataFrame that has exactly
`numPartitions`.}
+#' \item{"Option 3"} {Return a new SparkDataFrame partitioned by the
given columns,
+#' preserving the existing number of partitions.}
--- End diff --
Yes, @shivaram, I did refer to the scaladoc in Dataset.scala.
In reality, for the case `repartition(col1, col2)`, the logical plan internally uses `spark.sql.shuffle.partitions`:
```
/**
 * This method repartitions data using [[Expression]]s into `numPartitions`, and receives
 * information about the number of partitions during execution. Used when a specific ordering or
 * distribution is expected by the consumer of the query result. Use [[Repartition]] for RDD-like
 * `coalesce` and `repartition`.
 * If `numPartitions` is not specified, the number of partitions will be the number set by
 * `spark.sql.shuffle.partitions`.
 */
case class RepartitionByExpression(
    partitionExpressions: Seq[Expression],
    child: LogicalPlan,
    numPartitions: Option[Int] = None) extends RedistributeData {
  numPartitions match {
    case Some(n) => require(n > 0, "numPartitions must be greater than 0.")
    case None => // Ok
  }
}
```
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/partitioning.scala#L38
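For illustration, here is a minimal Scala sketch of that behavior (the local session, the `df`/`col1`/`col2` names, and the partition counts are illustrative assumptions, not part of this PR): repartitioning by columns without an explicit `numPartitions` falls back to `spark.sql.shuffle.partitions`, while an explicit count is honored.
```
import org.apache.spark.sql.SparkSession

// Illustrative local session; any SparkSession would do.
val spark = SparkSession.builder().master("local[2]").appName("repartition-sketch").getOrCreate()
import spark.implicits._

// Small example DataFrame with two columns.
val df = Seq((1, "a"), (2, "b"), (3, "c")).toDF("col1", "col2")

// Make the shuffle default easy to spot.
spark.conf.set("spark.sql.shuffle.partitions", "7")

// Columns only ("Option 3" in the doc above): planned as RepartitionByExpression with
// numPartitions = None, so the partition count comes from spark.sql.shuffle.partitions.
val byCols = df.repartition($"col1", $"col2")
println(byCols.rdd.getNumPartitions)        // 7

// Explicit numPartitions plus columns ("Option 1"): the explicit value wins.
val byColsAndNum = df.repartition(3, $"col1", $"col2")
println(byColsAndNum.rdd.getNumPartitions)  // 3
```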