[
https://issues.apache.org/jira/browse/MAHOUT-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133255#comment-14133255
]
Andrew Palumbo edited comment on MAHOUT-1615 at 9/14/14 4:19 PM:
-----------------------------------------------------------------
It looks like this has something to do with Spark reusing objects within partitions: Hadoop's SequenceFile RecordReader reuses a single Writable instance per split, so every record in a partition can end up referencing the same key object.
If I set minPartitions to 1, I get only a single distinct key:
{code}
mahout> val rdd = sdc.sequenceFile(path =
"/tmp/mahout-work-andy/20news-test-vectors/part-r-00000", classOf[Writable],
classOf[Writable], minPartitions = 1)
mahout> val keyVec = rdd.map(_._1).collect.distinct
keyVec: Array[org.apache.hadoop.io.Writable] = Array(/talk.religion.misc/84570)
mahout> keyVec.size
res1: Int = 1
{code}
and increasing minPartitions to 1000 brings back the missing /alt.atheism/* keys (each partition's records collapse to that partition's last key, so more partitions expose more distinct keys):
{code}
mahout> val rdd = sdc.sequenceFile(path =
"/tmp/mahout-work-andy/20news-test-vectors/part-r-00000", classOf[Writable],
classOf[Writable], minPartitions = 1000)
rdd: org.apache.spark.rdd.RDD[(org.apache.hadoop.io.Writable,
org.apache.hadoop.io.Writable)] = HadoopRDD[9] at sequenceFile at <console>:35
mahout> val keyVec = rdd.map(_._1).collect.distinct
keyVec: Array[org.apache.hadoop.io.Writable] = Array(/alt.atheism/51146,
/alt.atheism/51157, /alt.atheism/51188, /alt.atheism/51212, /alt.atheism/51222,
/alt.atheism/51239, /alt.atheism/51244, /alt.atheism/51275, /alt.atheism/51284,
/alt.atheism/51298, /alt.atheism/53055, /alt.atheism/53090, /alt.atheism/53113,
/alt.atheism/53140, /alt.atheism/53178, /alt.atheism/53203, /alt.atheism/53221,
/alt.atheism/53248, /alt.atheism/53274, /alt.atheism/53305, /alt.atheism/53325,
/alt.atheism/53333, /alt.atheism/53347, /alt.atheism/53376, /alt.atheism/53412,
/alt.atheism/53436, /alt.atheism/53458, /alt.atheism/53487, /alt.atheism/53515,
/alt.atheism/53541, /alt.atheism/53579, /alt.atheism/53600, /alt.atheism/53632,
/alt.atheism/53673, /alt.atheism/53759, /alt.atheism/53792, /alt.atheism/54127,
/alt...
mahout> keyVec.size
res2: Int = 995
{code}
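Materializing each key into an immutable String before collecting (a quick check, not a fix) confirms the data is intact and only the key objects are being reused:
{code}
// Copy each reused Writable key into an immutable String before collect(),
// so the driver sees every key value regardless of partitioning.
val keyStrings = rdd.map(_._1.toString).collect.distinct
{code}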
The suggested fix is:
{code}
rdd.map(_.clone).cache
{code}
though when I try this I get an error:
{code}
Access to protected method clone not permitted because
prefix type (org.apache.hadoop.io.Writable, org.apache.hadoop.io.Writable)
does not conform to
class $iwC where the access takes place
{code}
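The clone call fails because clone is protected on Object and Writable does not expose it; copying each record explicitly works around that. A minimal sketch (not a committed fix), using Hadoop's WritableUtils.clone to deep-copy both halves of each pair before caching:
{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.WritableUtils

// Configuration is not serializable, so build one per partition rather
// than capturing it in the closure.
val copied = rdd.mapPartitions { it =>
  val conf = new Configuration()
  // Deep-copy key and value so the cached records no longer alias the
  // single Writable instance the Hadoop RecordReader reuses.
  it.map { case (k, v) =>
    (WritableUtils.clone(k, conf), WritableUtils.clone(v, conf))
  }
}.cache()
{code}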
> SparkEngine drmFromHDFS returning the same Key for all Key,Vec Pairs for
> Text-Keyed SequenceFiles
> -------------------------------------------------------------------------------------------------
>
> Key: MAHOUT-1615
> URL: https://issues.apache.org/jira/browse/MAHOUT-1615
> Project: Mahout
> Issue Type: Bug
> Reporter: Andrew Palumbo
> Fix For: 1.0
>
>
> When reading seq2sparse output of the form <Text,VectorWritable> from HDFS
> in the spark-shell, SparkEngine's drmFromHDFS method creates RDDs with the
> same key for all pairs:
> {code}
> mahout> val drmTFIDF= drmFromHDFS( path =
> "/tmp/mahout-work-andy/20news-test-vectors/part-r-00000")
> {code}
> Has keys:
> {...}
> key: /talk.religion.misc/84570
> key: /talk.religion.misc/84570
> key: /talk.religion.misc/84570
> {...}
> for the entire set; this is the last key in the set.
> The problem can be traced to the first line of drmFromHDFS(...) in
> SparkEngine.scala:
> {code}
> val rdd = sc.sequenceFile(path, classOf[Writable], classOf[VectorWritable],
> minPartitions = parMin)
> // Get rid of VectorWritable
> .map(t => (t._1, t._2.get()))
> {code}
> which keeps a reference to the single reused Writable for t._1, so every pair ends up with the same key.
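> A sketch of a possible fix (assuming String keys are acceptable downstream; this is not the committed patch) is to copy the reused key out of the Writable in the same map:
> {code}
> val rdd = sc.sequenceFile(path, classOf[Writable], classOf[VectorWritable],
>     minPartitions = parMin)
>   // Get rid of VectorWritable, and copy the reused Text key into an
>   // immutable String so each pair owns its own key.
>   .map(t => (t._1.toString, t._2.get()))
> {code}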
>