cloud-fan commented on code in PR #46789:
URL: https://github.com/apache/spark/pull/46789#discussion_r1642423116
##########
python/pyspark/sql/tests/test_dataframe_query_context.py:
##########
@@ -41,7 +41,7 @@ def test_dataframe_query_context(self):
error_class="DIVIDE_BY_ZERO",
message_parameters={"config": '"spark.sql.ansi.enabled"'},
query_context_type=QueryContextType.DataFrame,
- pyspark_fragment="divide",
+ fragment="__truediv__",
Review Comment:
On the Scala side, we use the call site closest to the user code as
the fragment. I don't think `__truediv__` is user-friendly.
For Java, we can find the latest stack frame that comes from
`org.apache.spark`, whose next frame is the user code. Can we do the same thing
in Python?
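
For illustration, a minimal sketch of what that could look like in Python, assuming PySpark frames can be identified by a `pyspark` segment in the frame's file path (the helper name `user_call_site` is hypothetical, not an actual PySpark API):

```python
import traceback


def user_call_site():
    """Return the innermost stack frame that is not inside pyspark."""
    # extract_stack() orders frames outermost-first; drop the last entry
    # (this helper's own frame) and walk the rest innermost-first, so the
    # first non-pyspark frame we hit is the user's call site.
    for frame in reversed(traceback.extract_stack()[:-1]):
        if "pyspark" not in frame.filename:
            return f"{frame.filename}:{frame.lineno} in {frame.name}"
    return "<unknown>"
```

In a real implementation this would presumably be captured at the point where the DataFrame operation (e.g. `__truediv__`) is invoked, so the error's fragment points at the user's line instead of the dunder method name.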