jerrypeng commented on code in PR #48985:
URL: https://github.com/apache/spark/pull/48985#discussion_r1865180338
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TransformWithStateExec.scala:
##########
@@ -83,6 +83,17 @@ case class TransformWithStateExec(
// dummy value schema, the real schema will get set during state variable init time
private val DUMMY_VALUE_ROW_SCHEMA = new StructType().add("value", BinaryType)
+ // We need to just initialize key and value deserializer once per partition.
+ // The deserializers need to be lazily created on the executor since they
+ // are not serializable.
+ // Ideas for improvement can be found here:
+ // https://issues.apache.org/jira/browse/SPARK-50437
+ private lazy val getKeyObj =
Review Comment:
I think it doesn't hurt to add "@transient" here, and it is arguably best
practice (I am not a Scala expert), though it will likely not really help us
in this case. If someone later makes a change that evaluates "getKeyObj" or
"getValueObj" on the driver first, a task serialization exception will be
thrown, because "getKeyObj" and "getValueObj", i.e. what is returned by
"ObjectOperator.deserializeRowToObject", are not serializable. Our tests for
transformWithState should catch that issue.
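
For illustration, here is a minimal, self-contained sketch of the pattern
(the names NotSerializableDeserializer and StatefulOp are made up for this
example, not the actual Spark code): a plain lazy val evaluated on the
driver carries its cached value into the serialized task, while a
@transient lazy val drops the cached value and is re-evaluated lazily on
the executor.

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

// Stand-in for a deserializer that is not serializable (hypothetical).
class NotSerializableDeserializer {
  def apply(bytes: Array[Byte]): String = new String(bytes, "UTF-8")
}

class StatefulOp extends Serializable {
  // Without @transient, evaluating this on the driver before task
  // serialization would capture the cached NotSerializableDeserializer
  // and writeObject would fail with a NotSerializableException.
  @transient private lazy val getKeyObj: NotSerializableDeserializer =
    new NotSerializableDeserializer

  def decodeKey(bytes: Array[Byte]): String = getKeyObj(bytes)
}

object TransientLazyValDemo {
  def main(args: Array[String]): Unit = {
    val op = new StatefulOp
    op.decodeKey("k1".getBytes("UTF-8")) // force evaluation "on the driver"

    // Serialization still succeeds because the cached deserializer is
    // marked @transient; an executor would re-create it on first use.
    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    out.writeObject(op)
    out.close()
    println("serialized OK")
  }
}
```

If the @transient annotation is removed in this sketch, the writeObject call
above throws java.io.NotSerializableException, which is the failure mode the
transformWithState tests would surface.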
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]