hvanhovell commented on a change in pull request #26013: [SPARK-29347][SQL] Add JSON serialization for external Rows
URL: https://github.com/apache/spark/pull/26013#discussion_r332182808
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/Row.scala
##########
@@ -501,4 +513,88 @@ trait Row extends Serializable {
private def getAnyValAs[T <: AnyVal](i: Int): T =
if (isNullAt(i)) throw new NullPointerException(s"Value at index $i is null")
else getAs[T](i)
+
+ /** The compact JSON representation of this row. */
+ def json: String = compact(jsonValue)
Review comment:
So two things to consider here.
I want to use this in StreamingQueryProgress, right? All the JSON
serialization there is based on the json4s AST, not on strings (which is
what JacksonGenerator produces).
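To make the AST point concrete, here is a minimal hand-rolled stand-in (an assumption for illustration, not the real json4s `JValue` hierarchy or its `compact`/`render` API) showing why a `jsonValue` that returns an AST node composes into a larger document, whereas a pre-rendered string would have to be re-parsed first:

```scala
// Hand-rolled stand-in for a json4s-style AST (illustrative only; the
// real json4s types live in org.json4s and have a richer hierarchy).
sealed trait JValue
final case class JInt(i: BigInt) extends JValue
final case class JString(s: String) extends JValue
final case class JObject(fields: List[(String, JValue)]) extends JValue

object JsonAst {
  // Render a JValue to a compact JSON string, mirroring the spirit of
  // json4s's compact(render(...)).
  def compact(v: JValue): String = v match {
    case JInt(i)     => i.toString
    case JString(s)  => "\"" + s + "\""
    case JObject(fs) =>
      fs.map { case (k, j) => "\"" + k + "\":" + compact(j) }
        .mkString("{", ",", "}")
  }
}
```

Because a row's `jsonValue` is an AST node, StreamingQueryProgress can embed it directly as one field of its own `JObject` and render the whole document once at the end, instead of stitching together strings produced by JacksonGenerator.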
There is a difference between this being slow and what you are suggesting;
the latter is crazy inefficient. Let's break it down:
- Row to InternalRow conversion. You would need to create a converter per
row, because there is currently no way to safely cache one. You can use
either `ScalaReflection` or `RowEncoder` here; the latter is particularly
bad because it uses code generation (which takes on the order of
milliseconds and is only weakly cached on the driver).
- Setting up the JacksonGenerator: again, this is uncached, so the same
setup would be repeated for every tuple.
- Generating the string.
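A hypothetical cost sketch in plain Scala (this is not Spark's actual `RowEncoder` or `JacksonGenerator` API; `Converter` and the counter are made up to illustrate the shape of the problem): both variants produce the same output per row, but only one pays the expensive schema-derived setup once.

```scala
// Hypothetical: Converter stands in for an expensive, schema-derived
// converter (think a codegen'd encoder plus a configured JSON generator).
final case class Converter(fieldNames: Seq[String]) {
  def toJson(values: Seq[Any]): String =
    fieldNames.zip(values)
      .map { case (k, v) => "\"" + k + "\":\"" + v + "\"" }
      .mkString("{", ",", "}")
}

object RowJsonCost {
  var converterBuilds = 0 // counts how often the "expensive" setup runs

  def buildConverter(fieldNames: Seq[String]): Converter = {
    converterBuilds += 1 // pretend this costs milliseconds (code generation)
    Converter(fieldNames)
  }

  // Per-row setup, as a per-Row json method would do it:
  // the converter is rebuilt for every single tuple.
  def perRow(rows: Seq[Seq[Any]], fields: Seq[String]): Seq[String] =
    rows.map(r => buildConverter(fields).toJson(r))

  // Amortized setup, as a dataset-level writer does it:
  // the converter is built once per schema and reused.
  def amortized(rows: Seq[Seq[Any]], fields: Seq[String]): Seq[String] = {
    val c = buildConverter(fields)
    rows.map(c.toJson)
  }
}
```

Running both over the same rows yields identical JSON, but `perRow` triggers one converter build per tuple while `amortized` triggers exactly one, which is the gap the bullet points above are describing.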
Do you see my point here? Or shall I write a benchmark?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]