wangyum opened a new pull request #34810:
URL: https://github.com/apache/spark/pull/34810
### What changes were proposed in this pull request?
This PR adds support for setting the degree of read parallelism through data
source properties. For example:
```scala
spark.read.option("Parallel", 10).parquet("path/to/parquet")
```
```sql
CREATE TABLE very_large_partitioned_bucketed_table (
  id STRING,
  foo STRING,
  bar STRING,
  other STRING,
  dt STRING,
  type STRING)
USING parquet
OPTIONS (
  compression 'gzip',
  PARALLEL '1000'
)
PARTITIONED BY (dt, type)
CLUSTERED BY (id)
INTO 6000 BUCKETS
```
Oracle has a similar feature:
https://docs.oracle.com/cd/B19306_01/server.102/b14200/clauses006.htm
https://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm#BEIDFDEH
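Until an option like this is available, a comparable effect can be approximated by repartitioning the loaded DataFrame, at the cost of an extra shuffle, whereas the proposed `PARALLEL` option would control parallelism at scan time. A minimal sketch, assuming a local Spark session and a hypothetical `path/to/parquet` input:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: repartition() forces the target parallelism via a shuffle;
// the proposed PARALLEL option would instead set it when planning the scan.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("parallel-read-sketch")
  .getOrCreate()

val df = spark.read.parquet("path/to/parquet").repartition(10)
println(df.rdd.getNumPartitions) // 10, regardless of the file layout

spark.stop()
```

The shuffle-based workaround still pays the cost of the original scan's partitioning before redistributing, which is what a scan-time setting would avoid.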
### Why are the changes needed?
1. To decrease the degree of parallelism for very large partitioned and
bucketed tables, since bucket scan is not always used after
[SPARK-32859](https://issues.apache.org/jira/browse/SPARK-32859).
2. To increase the degree of parallelism on the stream side of a
`BroadcastNestedLoopJoinExec`.
3. To support setting parallelism through a hint in the future (Oracle has a
similar feature:
https://docs.oracle.com/cd/E11882_01/server.112/e41573/hintsref.htm#CHDJIGDG).
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unit test.