GitHub user falaki commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91170861
--- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
@@ -272,18 +282,22 @@ private[spark] object SerDe {
     }
   }

-  private def writeKeyValue(dos: DataOutputStream, key: Object, value: Object): Unit = {
+  private def writeKeyValue(
+      dos: DataOutputStream,
+      key: Object,
+      value: Object,
+      jvmObjectTracker: JVMObjectTracker): Unit = {
     if (key == null) {
       throw new IllegalArgumentException("Key in map can't be null.")
     } else if (!key.isInstanceOf[String]) {
       throw new IllegalArgumentException(s"Invalid map key type: ${key.getClass.getName}")
     }
     writeString(dos, key.asInstanceOf[String])
-    writeObject(dos, value)
+    writeObject(dos, value, jvmObjectTracker)
   }

-  def writeObject(dos: DataOutputStream, obj: Object): Unit = {
+  def writeObject(dos: DataOutputStream, obj: Object, jvmObjectTracker: JVMObjectTracker): Unit = {
--- End diff ---
Do we have a policy against implicit parameters in Spark? If you make
`JVMObjectTracker` an implicit parameter, the code becomes a lot cleaner:
you won't need to change every call site of `writeObject()`.
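To illustrate the suggestion, here is a minimal sketch of what the implicit-parameter version could look like. This is not the PR's actual code: `JVMObjectTracker` is a hypothetical stand-in for Spark's internal class, the method bodies are simplified, and the methods are left public for demonstration.

```scala
import java.io.DataOutputStream

// Hypothetical stand-in for Spark's internal JVMObjectTracker.
class JVMObjectTracker

object SerDeSketch {
  // With the tracker declared as an implicit parameter, any call site in a
  // scope that has an `implicit val tracker: JVMObjectTracker` stays unchanged.
  def writeObject(dos: DataOutputStream, obj: Object)
                 (implicit tracker: JVMObjectTracker): Unit = obj match {
    case s: String => writeString(dos, s)
    case _         => dos.writeByte(0) // real impl dispatches on type, possibly registering obj
  }

  def writeKeyValue(dos: DataOutputStream, key: Object, value: Object)
                   (implicit tracker: JVMObjectTracker): Unit = {
    if (key == null) {
      throw new IllegalArgumentException("Key in map can't be null.")
    } else if (!key.isInstanceOf[String]) {
      throw new IllegalArgumentException(s"Invalid map key type: ${key.getClass.getName}")
    }
    writeString(dos, key.asInstanceOf[String])
    writeObject(dos, value) // tracker is threaded through implicitly
  }

  private def writeString(dos: DataOutputStream, s: String): Unit = dos.writeUTF(s)
}
```

Callers then declare the tracker once, e.g. `implicit val tracker: JVMObjectTracker = new JVMObjectTracker`, and every nested `writeObject`/`writeKeyValue` call picks it up without an extra argument.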
---