ueshin opened a new pull request, #41148:
URL: https://github.com/apache/spark/pull/41148
### What changes were proposed in this pull request?
This is a follow-up of #40575 that disables the JVM stack trace in PySpark error messages by default. It can still be enabled via `spark.sql.pyspark.jvmStacktrace.enabled`:
```py
% ./bin/pyspark --remote local
...
>>> spark.conf.set("spark.sql.ansi.enabled", True)
>>> spark.sql('select 1/0').show()
...
Traceback (most recent call last):
...
pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO]
Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL
instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this
error.
== SQL(line 1, position 8) ==
select 1/0
^^^
>>>
>>> spark.conf.set("spark.sql.pyspark.jvmStacktrace.enabled", True)
>>> spark.sql('select 1/0').show()
...
Traceback (most recent call last):
...
pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO]
Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL
instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this
error.
== SQL(line 1, position 8) ==
select 1/0
^^^
JVM stacktrace:
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by
zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If
necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
at
org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226)
at
org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674)
...
```
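As the transcript above shows, the JVM stack trace can still be opted into for debugging. A minimal sketch of the two ways to do so (using the same `--remote local` setup as the examples above):

```shell
# Opt back into JVM stack traces at launch time
# by setting the config on the command line...
./bin/pyspark --remote local \
  --conf spark.sql.pyspark.jvmStacktrace.enabled=true

# ...or toggle it from within an already-running PySpark session:
#   >>> spark.conf.set("spark.sql.pyspark.jvmStacktrace.enabled", True)
```

Either way, subsequent errors will carry the `JVM stacktrace:` section shown above.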
### Why are the changes needed?
Currently, the JVM stack trace is included in error messages by default, which makes them unnecessarily verbose for most users:
```py
% ./bin/pyspark --remote local
...
>>> spark.conf.set("spark.sql.ansi.enabled", True)
>>> spark.sql('select 1/0').show()
...
Traceback (most recent call last):
...
pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO]
Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL
instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this
error.
== SQL(line 1, position 8) ==
select 1/0
^^^
JVM stacktrace:
org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by
zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If
necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
== SQL(line 1, position 8) ==
select 1/0
^^^
at
org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226)
at
org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674)
...
```
### Does this PR introduce _any_ user-facing change?
Yes. Users won't see the JVM stack trace by default; they can opt back in by setting `spark.sql.pyspark.jvmStacktrace.enabled` to `true`.
### How was this patch tested?
Existing tests.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]