Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19492#discussion_r144757046
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala ---
@@ -343,6 +367,25 @@ class JacksonParser(
record: T,
createParser: (JsonFactory, T) => JsonParser,
recordLiteral: T => UTF8String): Seq[InternalRow] = {
+    parseWithArrayOfPrimitiveSupport(record, createParser, recordLiteral) match {
+      case rows: Seq[InternalRow] => rows
+      case _: Seq[_] => throw BadRecordException(() => recordLiteral(record), () => None,
+        new RuntimeException("Conversion of array of primitive data is not yet supported here."))
--- End diff ---
This exception message looks a bit weird. How about `` `parse` is only used to parse the JSON input to the set of `InternalRow`s. Use `parseWithArrayOfPrimitiveSupport` when parsing array of primitive data is needed``?
---