xkrogen commented on code in PR #37634:
URL: https://github.com/apache/spark/pull/37634#discussion_r955234541
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateUnsafeProjection.scala:
##########
@@ -252,28 +264,43 @@ object GenerateUnsafeProjection extends CodeGenerator[Seq[Expression], UnsafePro
""".stripMargin
}
+  /**
+   * Wrap `inputExpr` in a try-catch block that will catch any [[NullPointerException]] that is
+   * thrown, instead throwing a (more helpful) error message as provided by
+   * [[org.apache.spark.sql.errors.QueryExecutionErrors.valueCannotBeNullError]].
+   */
+  private def wrapWithNpeHandling(inputExpr: String, descPath: Seq[String]): String =
+    s"""
+       |try {
+       |  ${inputExpr.trim}
+       |} catch (NullPointerException npe) {
+       |  throw QueryExecutionErrors.valueCannotBeNullError("${descPath.mkString(".")}");
Review Comment:
Good catch! I can't believe the ridiculous stuff Spark will accept as a
valid column name. Fixed and added a test for this.
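
For anyone following along: the concern is that `descPath` is spliced verbatim into a Java string literal inside the generated code, so a column name containing a quote or backslash would produce uncompilable codegen output. A minimal sketch of the idea, in plain Scala. The `escapeJavaString` helper here is hypothetical (the actual PR would use Spark's own escaping utilities), and `QueryExecutionErrors` is only referenced inside the generated string, not called:

```scala
object NpeHandlingSketch {
  // Hypothetical escape helper: escape characters that would otherwise
  // break the Java string literal embedded in the generated code.
  def escapeJavaString(s: String): String =
    s.flatMap {
      case '\\' => "\\\\"
      case '"'  => "\\\""
      case '\n' => "\\n"
      case '\r' => "\\r"
      case c    => c.toString
    }

  // Same shape as wrapWithNpeHandling in the diff, but with the field
  // path escaped before it is spliced into the generated Java source.
  def wrapWithNpeHandling(inputExpr: String, descPath: Seq[String]): String = {
    val path = escapeJavaString(descPath.mkString("."))
    s"""
       |try {
       |  ${inputExpr.trim}
       |} catch (NullPointerException npe) {
       |  throw QueryExecutionErrors.valueCannotBeNullError("$path");
       |}
     """.stripMargin
  }
}
```

With this, a column named `we"ird` yields `valueCannotBeNullError("we\"ird")` in the generated code rather than an unbalanced string literal.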
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]