MaxGekk commented on code in PR #45457:
URL: https://github.com/apache/spark/pull/45457#discussion_r1538627497
##########
sql/core/src/test/resources/sql-tests/results/udtf/udtf.sql.out:
##########
@@ -681,98 +679,120 @@ SELECT * FROM
InvalidEvalReturnsNoneToNonNullableColumnScalarType(TABLE(t2))
-- !query schema
struct<>
-- !query output
-org.apache.spark.api.python.PythonException
-pyspark.errors.exceptions.base.PySparkRuntimeError: [UDTF_EXEC_ERROR] User defined table function encountered an error in the 'eval' or 'terminate' method: Column 0 within a returned row had a value of None, either directly or within array/struct/map subfields, but the corresponding column type was declared as non-nullable; please update the UDTF to return a non-None value at this location or otherwise declare the column type as nullable.
Review Comment:
The deleted error message seems reasonable. Do you know why it was replaced?
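For context, the deleted message describes a nullability check that rejects None in a non-nullable column even when it is nested inside array/struct/map values. A minimal plain-Python sketch of that behavior (hypothetical helper names, not Spark's actual implementation):

```python
def contains_none(value):
    """True if value is None, directly or within list/tuple/dict subfields."""
    if value is None:
        return True
    if isinstance(value, (list, tuple)):
        return any(contains_none(v) for v in value)
    if isinstance(value, dict):
        return any(contains_none(v) for v in value.values())
    return False

def check_row(row, nullable_flags):
    """Raise if a non-nullable column holds None anywhere in its value."""
    for i, (value, nullable) in enumerate(zip(row, nullable_flags)):
        if not nullable and contains_none(value):
            raise RuntimeError(
                f"[UDTF_EXEC_ERROR] Column {i} within a returned row had a "
                "value of None, but the column type was declared as non-nullable"
            )
```

With this sketch, `check_row((None,), [False])` raises, while `check_row((None,), [True])` passes, which matches the contract the deleted message stated.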
##########
core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala:
##########
@@ -614,7 +614,7 @@ class PairRDDFunctionsSuite extends SparkFunSuite with
SharedSparkContext {
val e = intercept[SparkException] {
pairs.saveAsNewAPIHadoopFile[NewFakeFormatWithCallback]("ignored")
}
- assert(e.getCause.getMessage contains "failed to write")
+ assert(e.getCause.getMessage contains "Task failed while writing rows")
Review Comment:
How did it happen that you had to change this assertion?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]