This will cause the [Partitioner.defaultPartitioner](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/Partitioner.scala#L62) to be used. When called on a SourceRDD, this should be a HashPartitioner whose number of partitions equals the number of splits created by the bundleSize. When called on a SourceDStream, it should be a HashPartitioner whose number of partitions equals the defaultParallelism.
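To illustrate why the two cases differ, here is a minimal sketch (an assumption, not the real Spark API) of the selection rule in `Partitioner.defaultPartitioner`: when `spark.default.parallelism` is set it wins, otherwise the largest upstream partition count is used. The object and method names below are hypothetical.

```scala
// Simplified sketch of the number-of-partitions choice made by
// Spark's Partitioner.defaultPartitioner (assumption: mirrors the
// logic in Partitioner.scala, not the actual API).
object DefaultPartitionerSketch {
  // upstreamPartitionCounts: partition counts of the RDDs involved;
  // defaultParallelism: Some(n) when spark.default.parallelism is set.
  def chosenNumPartitions(upstreamPartitionCounts: Seq[Int],
                          defaultParallelism: Option[Int]): Int =
    defaultParallelism.getOrElse(upstreamPartitionCounts.max)

  def main(args: Array[String]): Unit = {
    // Batch case: a SourceRDD with 8 splits and no default
    // parallelism configured -> HashPartitioner(8).
    println(chosenNumPartitions(Seq(8), None))

    // Streaming case: defaultParallelism is configured (here 4),
    // so it takes precedence -> HashPartitioner(4).
    println(chosenNumPartitions(Seq(8), Some(4)))
  }
}
```

Under this reading, the SourceRDD case falls through to the upstream split count, while the SourceDStream case picks up the configured defaultParallelism.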
[ Full content available at: https://github.com/apache/beam/pull/6181 ]
