Github user dwmclary commented on the pull request:
https://github.com/apache/spark/pull/3213#issuecomment-63194531
@davies -- that's much cleaner; thanks! I think unicode should be the
default, but optional for the deserializer, so I added that option to the method.
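For reference, here's roughly what I have in mind, building on your snippet
below (a sketch only: it assumes the deserializer accepts a use_unicode flag,
and the exact names may shift in the final patch):

    from pyspark.rdd import RDD
    from pyspark.serializers import UTF8Deserializer

    def toJsonRDD(self, use_unicode=True):
        # Let the Scala side generate one JSON string per Row, then wrap
        # the resulting Java RDD with the requested string deserializer.
        rdd = self._jschema_rdd.baseSchemaRDD().toJsonRDD()
        # use_unicode=True (the default) yields unicode strings; pass
        # False to keep the raw UTF-8 encoded bytes instead.
        return RDD(rdd.toJavaRDD(), self.ctx, UTF8Deserializer(use_unicode))
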
@yhuai <https://github.com/yhuai>, I followed @NathanHowell
<https://github.com/NathanHowell>'s approach; if we're ever going to have
columns that are Scala collections, it's better than using ObjectMapper on
a Java HashMap.
This should be close to ready.
Also, I'm not writing nulls to the JSON string. There's too much
variability in how JSON parsers handle nulls; I believe it's better to avoid
the issue altogether.
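To make that concrete (field names here are made up purely for illustration):
a row with name=u'Alice' and age=None would come out as {"name": "Alice"},
not {"name": "Alice", "age": null}.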
Cheers,
Dan
On Fri, Nov 14, 2014 at 8:03 PM, Davies Liu <[email protected]>
wrote:
> I think it should be
>
> def toJsonRDD(self):
>     rdd = self._jschema_rdd.baseSchemaRDD().toJsonRDD()
>     return RDD(rdd.toJavaRDD(), self.ctx, UTF8Deserializer())
>
> Reply to this email directly or view it on GitHub
> <https://github.com/apache/spark/pull/3213#issuecomment-63159199>.
>