[
https://issues.apache.org/jira/browse/SPARK-19646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15871597#comment-15871597
]
BahaaEddin AlAila commented on SPARK-19646:
-------------------------------------------
What's puzzling, though, is that I looked at PySpark's implementation of
binaryRecords, and it just calls _jsc.binaryRecords and wraps the result in a
PySpark RDD.
So, if it is indeed calling the Scala implementation, shouldn't PySpark have
the same problem?
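One way I could imagine checking this from a spark-shell is to drive the same
JVM entry point that _jsc.binaryRecords resolves to (the Java API wrapper) and
see whether it shows the same replication. A rough sketch, with the file path
as a placeholder for a local CIFAR batch:

    import org.apache.spark.api.java.JavaSparkContext
    import scala.collection.JavaConverters._

    // Same entry point that PySpark's _jsc handle uses, driven from Scala.
    val jsc = new JavaSparkContext(sc)
    val viaJavaApi = jsc.binaryRecords("file:///data/cifar/data_batch_1.bin", 3073)

    // If this prints 1, the Java API layer replicates records too, and the
    // difference would have to come from whatever PySpark does with the bytes
    // after they leave the JVM; if it prints 5, the two layers really differ.
    println(viaJavaApi.take(5).asScala.map(_.toSeq).distinct.size)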
> binaryRecords replicates records in scala API
> ---------------------------------------------
>
> Key: SPARK-19646
> URL: https://issues.apache.org/jira/browse/SPARK-19646
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.0.0, 2.1.0
> Reporter: BahaaEddin AlAila
> Assignee: Sean Owen
>
> The Scala sc.binaryRecords replicates one record across the entire set.
> For example, I am trying to load the CIFAR binary data, where in one big
> binary file each 3073-byte record represents a 32x32x3-byte image plus 1
> byte for the label. The file resides on my local filesystem.
> .take(5) returns 5 records that are all the same, and .collect() returns
> 10,000 records that are all the same (a minimal reproduction sketch follows
> below).
> What is puzzling is that the PySpark version works perfectly even though,
> underneath, it is calling the Scala implementation.
> I have tested this on 2.1.0 and 2.0.0.
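> Concretely, what I run looks roughly like this (the path is a placeholder
> for my local copy of a CIFAR-10 batch file, which holds 10,000 records of
> 3,073 bytes each):
>
>     val records = sc.binaryRecords("file:///data/cifar/data_batch_1.bin", 3073)
>
>     // Expected: 5 different records; observed: 5 identical byte arrays.
>     val first5 = records.take(5)
>     println(first5.map(_.toSeq).distinct.size)   // prints 1 instead of 5
>
>     // Expected: essentially all records distinct; observed: one record repeated.
>     val all = records.collect()
>     println(all.length)                          // 10000
>     println(all.map(_.toSeq).distinct.size)      // prints 1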