xinrong-meng opened a new pull request, #37777:
URL: https://github.com/apache/spark/pull/37777
### What changes were proposed in this pull request?
Introduce `sql_conf` context manager for `pyspark.sql`.
### Why are the changes needed?
It simplifies controlling Spark SQL configuration for a code block, turning code like this:
```py
original_value = spark.conf.get("key")
spark.conf.set("key", "value")
...
spark.conf.set("key", original_value)
```
into this:
```py
with sql_conf({"key": "value"}):
    ...
```
[Here](https://github.com/apache/spark/blob/master/python/pyspark/pandas/utils.py#L490)
is such a context manager in the Pandas API on Spark.
We should introduce one in `pyspark.sql`, and deduplicate code if possible.
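As a rough illustration, a minimal sketch of such a context manager could look like the following. This is modeled on the pandas-on-Spark helper linked above; the `spark` keyword parameter and the handling of previously-unset keys are assumptions for this sketch, and the actual implementation in this PR may differ.

```python
from contextlib import contextmanager


@contextmanager
def sql_conf(pairs, *, spark):
    """Temporarily set Spark SQL configurations, restoring the
    original values (or unsetting the keys) on exit.

    NOTE: the signature here is illustrative, not the PR's final API.
    """
    keys = list(pairs.keys())
    # Remember the current values so they can be restored afterwards.
    old_values = [spark.conf.get(key, None) for key in keys]
    for key, value in pairs.items():
        spark.conf.set(key, value)
    try:
        yield
    finally:
        # Restore each key, even if the body raised an exception.
        for key, old in zip(keys, old_values):
            if old is None:
                spark.conf.unset(key)
            else:
                spark.conf.set(key, old)
```

Because restoration happens in a `finally` block, the configuration is reverted even when the wrapped code raises, which the manual set/restore pattern above does not guarantee.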
### Does this PR introduce _any_ user-facing change?
Yes. Users may use the context manager to manage the Spark SQL configuration
for a code block.
### How was this patch tested?
Unit tests.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]