Github user frreiss commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r59598252
--- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
@@ -355,6 +355,13 @@ private[spark] object SerDe {
         writeInt(dos, v.length)
         v.foreach(elem => writeObject(dos, elem))
+        // Handle Properties
--- End diff --
Personally I don't think that special-casing the Properties object here is
a major problem -- java.util.Properties is a very commonly used class, and it
would make sense for the RPC layer of SparkR to handle Properties alongside
other common types like Map and String. But it makes sense to defer to Shivaram
on this point. I would vote for option (2) above.
Note that, as far as I can see, the code path here that passes a Properties
object back to R is only exercised by the test cases in this PR. The actual
implementation of `read.jdbc()` only _writes_ to Properties objects.
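For reference, here's a standalone sketch of the kind of special-casing I have
in mind: flatten the Properties into string key/value pairs, much as the
existing map handling does. This is not the code from this PR, and the
`writeString` helper below is hypothetical -- SerDe.scala has its own
writeString with its own wire format, which this does not try to reproduce.

```scala
import java.io.{ByteArrayOutputStream, DataOutputStream}
import java.util.Properties

object PropertiesSerDeSketch {
  // Hypothetical helper, standing in for SerDe.scala's real writeString.
  def writeString(dos: DataOutputStream, s: String): Unit = {
    val bytes = s.getBytes("UTF-8")
    dos.writeInt(bytes.length)
    dos.write(bytes, 0, bytes.length)
  }

  // Serialize a Properties object as a count followed by alternating
  // key/value strings, since property keys and values are both strings.
  def writeProperties(dos: DataOutputStream, props: Properties): Unit = {
    val names = props.stringPropertyNames()
    dos.writeInt(names.size())  // number of key/value pairs
    val it = names.iterator()
    while (it.hasNext) {
      val key = it.next()
      writeString(dos, key)
      writeString(dos, props.getProperty(key))
    }
  }

  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.setProperty("user", "test")
    props.setProperty("password", "secret")
    val bos = new ByteArrayOutputStream()
    writeProperties(new DataOutputStream(bos), props)
    println(s"serialized ${bos.size()} bytes for ${props.size()} properties")
  }
}
```

The point is just that the receiving side in R can treat the result exactly
like any other map of strings, so the special case stays small.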