[
https://issues.apache.org/jira/browse/SPARK-19646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen reopened SPARK-19646:
-------------------------------
Ah, I take it back. With that info I think this is in fact a problem. Although
the problem is indeed caused by Hadoop reusing Writables, this is not a case
where the user is touching Writables. binaryRecords gets the byte[] from a
BytesWritable, but that reference is the same every time, including the
underlying byte array, so it needs to be copied. Simple fix.
> binaryRecords replicates records in scala API
> ---------------------------------------------
>
> Key: SPARK-19646
> URL: https://issues.apache.org/jira/browse/SPARK-19646
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.0.0, 2.1.0
> Reporter: BahaaEddin AlAila
> Priority: Minor
>
> The Scala sc.binaryRecords replicates one record across the entire set.
> For example, I am trying to load the CIFAR binary data, where in one big
> binary file each 3073-byte record represents a 32x32x3-byte image plus
> 1 byte for the label. The file resides on my local filesystem.
> .take(5) returns 5 records, all the same; .collect() returns 10,000 records,
> all the same.
> What is puzzling is that the PySpark version works perfectly, even though
> underneath it is calling the Scala implementation.
> I have tested this on 2.1.0 and 2.0.0.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)