GitHub user mallman commented on the issue:
https://github.com/apache/spark/pull/18269
@bbossy I've built and deployed a branch of Spark 2.2 with your patch and
compared its behavior to the same branch of Spark 2.2 without your patch. I'm
seeing different behavior, but not what I expected.
My test table has three partition columns, `ds`, `h`, and `chunk`. There is 1 `ds` value, 2 `h` values, and 51 `chunk` values, split into 27 and 24
partitions under the two `h` directories. I set
`spark.sql.sources.parallelPartitionDiscovery.threshold` to 10. I believe this
fits the scenario you're trying to remedy.
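For reference, here is a minimal sketch of how a table with that shape could be produced. The output path, the `ds` value, and the `value` payload column are hypothetical, not the actual test data; only the partition layout (1 × 2 × 51, split 27/24 across the two `h` directories) and the threshold setting match the description above:

```scala
// A sketch, not the actual test data: path, ds value, and payload are hypothetical.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("partition-discovery-test").getOrCreate()
import spark.implicits._

// Distributed partition discovery should kick in once more than 10 leaf
// directories have to be listed.
spark.conf.set("spark.sql.sources.parallelPartitionDiscovery.threshold", "10")

// 1 ds value, 2 h values, 51 chunk values split 27/24 across the two h directories.
val rows = (0 until 51).map { chunk =>
  val h = if (chunk < 27) 0 else 1
  ("2017-06-12", h, chunk, s"payload-$chunk")
}

rows.toDF("ds", "h", "chunk", "value")
  .write
  .partitionBy("ds", "h", "chunk")
  .parquet("/tmp/partition_discovery_test") // hypothetical location
```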
I use `spark.read.parquet` to load the table. When I load it with the
unpatched branch, Spark launches three jobs with 27, 24, and 1 stages, in that
order. When I load it with the patched branch, Spark launches three jobs with
51, 1, and 51 stages, in that order. Does this match your expectations? I was
expecting Spark to launch only two jobs.
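For completeness, the load is just the plain Parquet reader over the table root, assuming the same hypothetical path as in the sketch above:

```scala
// Partition discovery for ds/h/chunk runs while this relation is resolved;
// the jobs described above appear in the Spark UI at this point.
val df = spark.read.parquet("/tmp/partition_discovery_test")
```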