Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18875#discussion_r137975214
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonGenerator.scala
---
@@ -193,14 +228,30 @@ private[sql] class JacksonGenerator(
*
* @param row The row to convert
*/
- def write(row: InternalRow): Unit = writeObject(writeFields(row, schema, rootFieldWriters))
+ def write(row: InternalRow): Unit = dataType match {
+ case st: StructType =>
+ writeObject(writeFields(row, st, rootFieldWriters))
+ case _ => throw new UnsupportedOperationException(
s"`JacksonGenerator` can only be used to write out a row when initialized with `StructType`.")
+ }
/**
- * Transforms multiple `InternalRow`s to JSON array using Jackson
+ * Transforms multiple `InternalRow`s or `MapData`s to JSON array using Jackson
*
- * @param array The array of rows to convert
+ * @param array The array of rows or maps to convert
*/
def write(array: ArrayData): Unit = writeArray(writeArrayData(array, arrElementWriter))
+ /**
+ * Transforms a `MapData` to JSON object using Jackson
+ *
+ * @param map a map to convert
+ */
+ def write(map: MapData): Unit = dataType match {
+ case mt: MapType => writeObject(writeMapData(map, mt, mapElementWriter))
+ case _ => throw new UnsupportedOperationException(
--- End diff --
Similar to https://github.com/apache/spark/pull/18875/files#r137975134, we
can avoid this. Doing the pattern match on every write call seems too burdensome.
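A minimal sketch of the suggestion, using simplified stand-in types rather than Spark's actual `DataType`/`InternalRow` classes: resolve the writer function once at construction time, so `write` itself performs no per-call pattern matching and unsupported configurations still fail with the same `UnsupportedOperationException`.

```scala
// Hedged sketch, not Spark's real code: DataType, StructType, MapType and the
// String-based "rows" below are simplified stand-ins for illustration only.
sealed trait DataType
case object StructType extends DataType
case object MapType extends DataType

class Generator(dataType: DataType) {
  // The match runs once here, at construction, instead of inside write().
  private val rowWriter: String => String = dataType match {
    case StructType => row => s"{row:$row}"
    case _ => _ => throw new UnsupportedOperationException(
      "`Generator` can only be used to write out a row when initialized with `StructType`.")
  }

  // Hot path: a plain function call, no dataType match per write.
  def write(row: String): String = rowWriter(row)
}

val gen = new Generator(StructType)
println(gen.write("a")) // {row:a}
```

The trade-off is that a `Generator` built with the wrong type fails on the first `write` call rather than at construction; if fail-fast behavior is preferred, the match could instead throw eagerly in the constructor.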
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]