cloud-fan commented on a change in pull request #33654:
URL: https://github.com/apache/spark/pull/33654#discussion_r683429396
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
##########
@@ -330,12 +330,8 @@ class JacksonParser(
     case udt: UserDefinedType[_] =>
       makeConverter(udt.sqlType)
 
-    case _ =>
-      (parser: JsonParser) =>
-        // Here, we pass empty `PartialFunction` so that this case can be
-        // handled as a failed conversion. It will throw an exception as
-        // long as the value is not null.
-        parseJsonToken[AnyRef](parser, dataType)(PartialFunction.empty[JsonToken, AnyRef])
+    // We don't actually hit this exception though, we keep it for understandability
+    case _ => throw QueryExecutionErrors.unsupportedTypeError(dataType)
Review comment:
An unsupported data type is a fatal error and should happen very rarely. I think throwing an exception makes more sense here, as this case should not be reachable. We should finish the TIMESTAMP NTZ support before Spark 3.3.
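
For context on the removed branch: the old fallback "worked" because applying an empty `PartialFunction` to any token throws a `MatchError`, which `parseJsonToken` then surfaces as a failed conversion. Below is a minimal, dependency-free sketch of that mechanism in plain Scala (using `String` in place of Jackson's `JsonToken`; the object and value names are illustrative, not the Spark internals):

```scala
// Sketch only: shows why PartialFunction.empty acts as an
// "always fails unless null" converter in the old code path.
object EmptyPartialFunctionSketch {
  def main(args: Array[String]): Unit = {
    // PartialFunction.empty is defined for no input at all.
    val convert: PartialFunction[String, AnyRef] = PartialFunction.empty

    assert(!convert.isDefinedAt("VALUE_STRING"))

    // Applying it throws a MatchError, which the parser reported as a
    // failed conversion for any non-null value.
    try convert("VALUE_STRING")
    catch { case _: MatchError => println("failed conversion, as expected") }
  }
}
```

The explicit `throw QueryExecutionErrors.unsupportedTypeError(dataType)` makes the same outcome direct and self-documenting instead of relying on that indirection.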