Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19492#discussion_r144835433
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala ---
@@ -343,6 +367,25 @@ class JacksonParser(
       record: T,
       createParser: (JsonFactory, T) => JsonParser,
       recordLiteral: T => UTF8String): Seq[InternalRow] = {
 +    parseWithArrayOfPrimitiveSupport(record, createParser, recordLiteral) match {
 +      case rows: Seq[InternalRow] => rows
 +      case _: Seq[_] => throw BadRecordException(() => recordLiteral(record), () => None,
 +        new RuntimeException("Conversion of array of primitive data is not yet supported here."))
--- End diff ---
To clarify: I think the only way to hit this exception is to pass an
`ArrayType` into the `JacksonParser` constructor and then call `parse` instead
of `parseWithArrayOfPrimitiveSupport`. Because `JacksonParser` is used
internally, I assume such usage would have to be intentional, and the developer
would get the exception right away.
So I don't think this exception can ever be seen by end users, unless we ship
such broken code to users in a Spark release.
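For illustration only, here is a minimal, self-contained Scala sketch of the
control flow I mean. `InternalRow`, `BadRecordException`, and `Parser` below
are simplified stand-ins I made up, not the real Spark classes (and since a
`case rows: Seq[InternalRow]` pattern is subject to type erasure, the sketch
checks the elements instead):

```scala
// Hypothetical stand-ins for the Spark internals discussed above.
case class InternalRow(values: Any*)

case class BadRecordException(record: () => String, cause: Throwable)
    extends Exception(cause)

class Parser(schemaIsArrayOfPrimitive: Boolean) {

  // Stand-in for parseWithArrayOfPrimitiveSupport: with an array-of-primitive
  // schema the result elements are raw values, not InternalRows.
  private def parseWithArrayOfPrimitiveSupport(record: String): Seq[Any] =
    if (schemaIsArrayOfPrimitive) Seq(1, 2, 3)
    else Seq(InternalRow(record))

  // Stand-in for parse: it supports only the InternalRow-producing path, so
  // wiring up an array-of-primitive schema here fails on the first record.
  def parse(record: String): Seq[InternalRow] =
    parseWithArrayOfPrimitiveSupport(record) match {
      case rows if rows.forall(_.isInstanceOf[InternalRow]) =>
        rows.map(_.asInstanceOf[InternalRow])
      case _ =>
        throw BadRecordException(() => record,
          new RuntimeException(
            "Conversion of array of primitive data is not yet supported here."))
    }
}
```

In this sketch, `new Parser(schemaIsArrayOfPrimitive = true).parse("[1,2,3]")`
throws `BadRecordException` on the very first call, which is why a developer
making this mistake would notice immediately rather than it surfacing for end
users.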