In my use case we use spark.dynamicAllocation as a way to remove a knob (--num-executors) in our attempt to become knobless. When running in batch mode, the runner creates the SourceRDDs and, based on their number of partitions, dynamic allocation tries to spin up that many executors. This completely backfires when the SourceRDD is partitioned based on defaultParallelism, because that will now be equal to 2 (the default --num-executors).
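To make the failure mode concrete, here is a minimal sketch (not the runner's actual code; the class and method names are hypothetical) of what a partition count derived from defaultParallelism looks like:

```java
import org.apache.spark.api.java.JavaSparkContext;

/** Hypothetical illustration of the failure mode described above. */
class DefaultParallelismPartitioning {

  /**
   * With spark.dynamicAllocation.enabled=true and no --num-executors, Spark starts
   * with the default of 2 executors, so defaultParallelism ends up equally small.
   * If the SourceRDD's partition count falls back to it, dynamic allocation never
   * has a reason to request more executors for the read.
   */
  static int partitionCount(JavaSparkContext jsc) {
    return jsc.defaultParallelism(); // 2 in the scenario above
  }
}
```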
If you prefer, we could prevent bundleSize from being a knob and always use 64MB (the Apache Hadoop default block size). I understand why streaming acts this way, but for batch the users would have to guess how many executors they need. If they do not guess high enough, it is entirely possible to end up with more than 2GB of data in a partition (https://issues.apache.org/jira/browse/SPARK-6235). Starting at 64MB per partition does not eliminate this possibility, but it does reduce the chances. For example, if a user read a 10GB file with 1 executor, it would fail if it ever tried to cache the partition; breaking it into 64MB partitions gives it a chance of succeeding (depending on executor memory, etc.).
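As a rough sketch of the 64MB alternative (again hypothetical names, not a proposed patch), the partition count could be derived from Beam's BoundedSource.split instead of from the current executor count:

```java
import java.util.List;

import org.apache.beam.sdk.io.BoundedSource;
import org.apache.beam.sdk.options.PipelineOptions;

/** Hypothetical sketch of sizing partitions by a fixed 64MB bundle. */
class FixedBundleSizePartitioning {

  // 64MB, the Apache Hadoop default block size mentioned above.
  static final long BUNDLE_SIZE_BYTES = 64L * 1024 * 1024;

  /**
   * Splits the source into ~64MB bundles, so a 10GB read produces ~160 partitions
   * regardless of how many executors are currently running, keeping each partition
   * far below Spark's 2GB limit (SPARK-6235).
   */
  static <T> int partitionCount(BoundedSource<T> source, PipelineOptions options)
      throws Exception {
    List<? extends BoundedSource<T>> bundles = source.split(BUNDLE_SIZE_BYTES, options);
    return Math.max(1, bundles.size());
  }
}
```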
