Github user MLnick commented on the pull request:
https://github.com/apache/spark/pull/455#issuecomment-45803692
I had planned to take a look at that next. If you want to take a crack at
it I'm happy to review.
You'd have to more or less do things in reverse. The PySpark code for Spark
SQL does something similar to what would be required, i.e. it takes an RDD of
bytes (pickled objects from Python) and deserializes them to Java objects. Then
you'll need to convert those to Writables so you can use the relevant output
format.
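
Roughly the shape I have in mind (just a sketch under my own assumptions, not the actual implementation; `unpickle` is a hypothetical stand-in for whatever Python deserialization we end up reusing, and everything is forced to `Text` for simplicity):

```scala
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD

object PickledSaveSketch {
  // Hypothetical placeholder: turn one pickled byte array into key/value pairs.
  def unpickle(bytes: Array[Byte]): Seq[(Any, Any)] = ???

  def saveAsSequenceFileSketch(pickled: RDD[Array[Byte]], path: String): Unit = {
    // 1. Deserialize the pickled Python objects into Java objects.
    val javaPairs: RDD[(Any, Any)] = pickled.flatMap(unpickle)

    // 2. Convert those Java objects to Writables.
    val writables: RDD[(Text, Text)] = javaPairs.map { case (k, v) =>
      (new Text(k.toString), new Text(v.toString))
    }

    // 3. Hand the Writable pairs to the relevant Hadoop output format.
    writables.saveAsNewAPIHadoopFile[SequenceFileOutputFormat[Text, Text]](path)
  }
}
```

In practice the key/value conversion would have to dispatch on the runtime types rather than stringify everything, but the three steps above are the gist.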
On Wed, Jun 11, 2014 at 1:55 AM, kanzhang <[email protected]>
wrote:
> @MLnick hey Nick, are you planning on implementing the reverse direction
> - ```saveAsSequenceFile``` and ```saveAsHadoopFile```? If not, I'd like to give
> it a shot.
> ---
> Reply to this email directly or view it on GitHub:
> https://github.com/apache/spark/pull/455#issuecomment-45687030