Github user gengliangwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/21439#discussion_r192488365
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala ---
@@ -101,6 +102,13 @@ class JacksonParser(
     }
   }
+  private def makeArrayRootConverter(at: ArrayType): JsonParser => Seq[InternalRow] = {
+    val elemConverter = makeConverter(at.elementType)
+    (parser: JsonParser) => parseJsonToken[Seq[InternalRow]](parser, at) {
+      case START_ARRAY => Seq(InternalRow(convertArray(parser, elemConverter)))
--- End diff ---
In line 87, we already handle the empty top-level array case:
```
val array = convertArray(parser, elementConverter)
// Here, as we support reading top level JSON arrays and take every element
// in such an array as a row, this case is possible.
if (array.numElements() == 0) {
Nil
} else {
array.toArray[InternalRow](schema).toSeq
}
```
Should we also follow this pattern here and return `Nil` for an empty top-level array?
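
For illustration, a rough sketch of what applying that pattern here could look like (names follow the diff above; this is only a sketch, and whether an empty top-level array should produce zero rows rather than one row containing an empty array is the open question):

```scala
private def makeArrayRootConverter(at: ArrayType): JsonParser => Seq[InternalRow] = {
  val elemConverter = makeConverter(at.elementType)
  (parser: JsonParser) => parseJsonToken[Seq[InternalRow]](parser, at) {
    case START_ARRAY =>
      val array = convertArray(parser, elemConverter)
      // Mirror the struct root converter at line 87: an empty top-level
      // array yields no rows instead of one row holding an empty array.
      if (array.numElements() == 0) {
        Nil
      } else {
        Seq(InternalRow(array))
      }
  }
}
```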