[
https://issues.apache.org/jira/browse/SPARK-910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-910.
-----------------------------
Resolution: Not a Problem
Given the PR discussion, it looks like this was resolved as Not a Problem:
either the InputFormat has to create new key/value objects, or the caller in
Spark does.
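As a caller-side sketch (assuming the reporter's custom XmlInputFormat,
implementing the old org.apache.hadoop.mapred.InputFormat API, and an existing
SparkContext named sc), copying the reused Writable objects into immutable
values before take() avoids the duplicate-record symptom:

    import org.apache.hadoop.io.{LongWritable, Text}

    // Hadoop InputFormats are allowed to reuse the same key/value
    // instances across records, so materialize copies first.
    val hf = sc.hadoopFile("hdfs://namenode.local/something.xml",
      classOf[XmlInputFormat], classOf[LongWritable], classOf[Text])

    // Extract immutable values so each record owns its own data.
    val copied = hf.map { case (k, v) => (k.get, v.toString) }
    copied.take(5) // five distinct records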
> hadoopFile creates RecordReader key and value at the wrong scope
> ----------------------------------------------------------------
>
> Key: SPARK-910
> URL: https://issues.apache.org/jira/browse/SPARK-910
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 0.7.3
> Reporter: aaron babcock
>
> I'm not a Scala or Hadoop expert, so forgive me if I'm wrong, but it seems
> to me that SparkContext.hadoopFile is broken.
> hf = sc.hadoopFile("hdfs://namenode.local/something.xml",
> XmlInputFormat.class, LongWritable.class, Text.class);
> hf.take(5);
> produces the same record over and over instead of five distinct records.
> Here is a pull request for a proposed fix:
> https://github.com/mesos/spark/pull/934