Sergei Lebedev created SPARK-19353:
--------------------------------------

             Summary: Support binary I/O in PipedRDD
                 Key: SPARK-19353
                 URL: https://issues.apache.org/jira/browse/SPARK-19353
             Project: Spark
          Issue Type: Improvement
            Reporter: Sergei Lebedev
            Priority: Minor


The current design of {{RDD.pipe}} is very restrictive.

It is line-based: each element of the input RDD [gets 
serialized|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala#L143]
 into one or more lines. Symmetrically, each line of the child process's output 
becomes a single element of the output RDD.

It allows customizing the output format via {{printRDDElement}}, but not the 
input format.
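To make the framing problem concrete, here is a standalone sketch (no Spark involved; {{writeAsLines}}/{{readLines}} are illustrative helpers, not Spark API) of why newline-delimited serialization cannot carry arbitrary binary payloads:

{code}
import java.io.{ByteArrayOutputStream, PrintWriter}

// Mimics the line-based protocol: each element is written followed by '\n'.
def writeAsLines(elems: Seq[String]): Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val pw = new PrintWriter(bos)
  elems.foreach(e => pw.print(e + "\n"))
  pw.flush()
  bos.toByteArray
}

// Mimics the read side: the stream is split back into elements on '\n'.
def readLines(bytes: Array[Byte]): Seq[String] =
  new String(bytes, "UTF-8").split("\n", -1).toSeq.dropRight(1)

// An element containing an embedded newline does not survive the round trip:
// readLines(writeAsLines(Seq("a\nb"))) == Seq("a", "b"), not Seq("a\nb")
{code}

Any payload that may contain the delimiter byte is silently re-framed, which is exactly why binary data cannot be piped safely today.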

Nor is it designed for extensibility: the only way to get a "BinaryPipedRDD" is 
to copy/paste most of {{PipedRDD}} and change the relevant parts.

These limitations have been discussed on 
[SO|http://stackoverflow.com/questions/27986830/how-to-pipe-binary-data-in-apache-spark]
 and on the mailing list, but no issue had been created until now.

A possible solution to at least the first two limitations is to factor out the 
format into a separate object (or objects), for instance {{InputWriter}} and 
{{OutputReader}}, following the Hadoop Streaming API.

{code}
import java.io.{InputStream, OutputStream}

trait InputWriter[T] {
    def write(os: OutputStream, elem: T): Unit
}

trait OutputReader[T] {
    def read(is: InputStream): T
}
{code}

The default implementations would keep today's line-based format, but users 
would be able to selectively swap in alternative implementations where 
appropriate.
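As a sketch of what such implementations might look like (all names below are illustrative, not part of Spark), a length-prefixed pair handles arbitrary binary payloads, including bytes that happen to equal {{'\n'}}:

{code}
import java.io.{DataInputStream, DataOutputStream, InputStream, OutputStream}

trait InputWriter[T] {
    def write(os: OutputStream, elem: T): Unit
}

trait OutputReader[T] {
    def read(is: InputStream): T
}

// Hypothetical binary format: each element is framed by a 4-byte
// big-endian length prefix, so payload bytes are never interpreted
// as delimiters.
object LengthPrefixedWriter extends InputWriter[Array[Byte]] {
    def write(os: OutputStream, elem: Array[Byte]): Unit = {
        val dos = new DataOutputStream(os)
        dos.writeInt(elem.length)
        dos.write(elem)
        dos.flush()
    }
}

object LengthPrefixedReader extends OutputReader[Array[Byte]] {
    def read(is: InputStream): Array[Byte] = {
        val dis = new DataInputStream(is)
        val buf = new Array[Byte](dis.readInt())
        dis.readFully(buf)
        buf
    }
}
{code}

A line-based {{InputWriter[String]}}/{{OutputReader[String]}} pair would reproduce the current behaviour, so existing {{pipe}} callers would see no change.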



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
