Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/7285#issuecomment-119625401
Ah, I think this may have to be a check higher up, on the argument to
`repartition`? This looks too low-level. An RDD with 0 partitions is OK; it's
only repartitioning a (non-empty) RDD to 0 partitions that should be rejected.
As it stands, the low-level check breaks the existing zero-partition RDD test:
```
[info] - zero-partition RDD *** FAILED *** (22 milliseconds)
[info] java.lang.IllegalArgumentException: requirement failed: Number of partitions (0) must be positive.
[info] at scala.Predef$.require(Predef.scala:233)
[info] at org.apache.spark.HashPartitioner.<init>(Partitioner.scala:79)
[info] at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
[info] at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
[info] at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
[info] at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
[info] at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
[info] at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
[info] at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:289)
[info] at org.apache.spark.rdd.PairRDDFunctionsSuite$$anonfun$27.apply$mcV$sp(PairRDDFunctionsSuite.scala:388)
[info] at org.apache.spark.rdd.PairRDDFunctionsSuite$$anonfun$27.apply(PairRDDFunctionsSuite.scala:381)
[info] at org.apache.spark.rdd.PairRDDFunctionsSuite$$anonfun$27.apply(PairRDDFunctionsSuite.scala:381)
```
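
Roughly what I have in mind, as a sketch only (the `RepartitionCheck` /
`repartitionChecked` names are just for illustration, not actual Spark code):
validate the caller's argument at the `repartition` entry point, so an RDD that
already has 0 partitions keeps working.
```scala
import org.apache.spark.rdd.RDD

// Hypothetical wrapper used only to illustrate the suggestion; in the actual
// patch the require(...) would sit at the top of RDD.repartition / RDD.coalesce
// rather than inside HashPartitioner.
object RepartitionCheck {
  def repartitionChecked[T](rdd: RDD[T], numPartitions: Int): RDD[T] = {
    // Fail fast on the caller's argument, before any HashPartitioner is built,
    // so an RDD that already has 0 partitions is left untouched.
    require(numPartitions > 0,
      s"Number of partitions ($numPartitions) must be positive.")
    rdd.repartition(numPartitions)
  }
}
```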