[ https://issues.apache.org/jira/browse/MAHOUT-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133249#comment-14133249 ]
Andrew Palumbo commented on MAHOUT-1615:
----------------------------------------

Actually, it looks like the line above is not where everything is getting dropped. Looking closer in the Mahout spark-shell, I see several other keys (though not all):

{code}
mahout> val rdd = sdc.sequenceFile(path = "/tmp/mahout-work-andy/20news-test-vectors/part-r-00000", classOf[Writable], classOf[VectorWritable], minPartitions = 10).map(t => (t._1, t._2.get()))

mahout> val keyVec = rdd.map(_._1).collect.distinct
keyVec: Array[org.apache.hadoop.io.Writable] = Array(/comp.os.ms-windows.misc/9141, /comp.sys.mac.hardware/52007, /rec.autos/101620, /rec.sport.baseball/104334, /sci.crypt/15200, /sci.electronics/54486, /sci.space/61469, /talk.politics.guns/54503, /talk.politics.mideast/77353, /talk.religion.misc/84570)

mahout> keyVec.size
res1: Int = 10
{code}

However, I'm expecting several more distinct values for each category, e.g.:

/comp.os.ms-windows.misc/9141
/comp.os.ms-windows.misc/9142
{...}

{code}
mahout seqdumper -i /tmp/mahout-work-andy/20news-test-vectors/part-r-00000 | less
{code}

shows that the first entry is:

Key: /alt.atheism/51119

which doesn't seem to be showing up at all in the keys read in from the SparkContext.

> SparkEngine drmFromHDFS returning the same Key for all Key,Vec Pairs for Text-Keyed SequenceFiles
> --------------------------------------------------------------------------------------------------
>
>                 Key: MAHOUT-1615
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1615
>             Project: Mahout
>          Issue Type: Bug
>            Reporter: Andrew Palumbo
>             Fix For: 1.0
>
>
> When reading seq2sparse output from HDFS in the spark-shell, with pairs of the form <Text, VectorWritable>, SparkEngine's drmFromHDFS method is creating RDDs with the same key for all pairs:
> {code}
> mahout> val drmTFIDF = drmFromHDFS( path = "/tmp/mahout-work-andy/20news-test-vectors/part-r-00000")
> {code}
> Has keys:
> {...}
> key: /talk.religion.misc/84570
> key: /talk.religion.misc/84570
> key: /talk.religion.misc/84570
> {...}
> for the entire set. This is the last key in the set.
> The problem can be traced to the first line of drmFromHDFS(...) in SparkEngine.scala:
> {code}
> val rdd = sc.sequenceFile(path, classOf[Writable], classOf[VectorWritable], minPartitions = parMin)
>   // Get rid of VectorWritable
>   .map(t => (t._1, t._2.get()))
> {code}
> which gives the same key for all t._1.
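A plausible reading of the numbers above, offered as an assumption rather than a confirmed diagnosis: Hadoop's SequenceFile record reader reuses a single Writable instance per partition, mutating it in place for each record, so by the time collect materializes a partition's array every element aliases that partition's last key. That would account for exactly 10 distinct keys with minPartitions = 10, and for the file's first key (/alt.atheism/51119) never appearing. The VectorWritable wrapper is reused the same way, but its get() returns the vector built during deserialization, which would explain why only the keys collapse while the values come through intact. Below is a minimal sketch of one way to break the aliasing, assuming Text keys as written by seq2sparse; the classOf[Text] parameter and the toString copy are illustrative, not necessarily the committed fix:

{code}
import org.apache.hadoop.io.Text
import org.apache.mahout.math.VectorWritable

// Copy each reused Writable into an immutable value before any
// collect/distinct/cache, so that no two records share one object.
val rdd = sdc.sequenceFile(path = "/tmp/mahout-work-andy/20news-test-vectors/part-r-00000",
    classOf[Text], classOf[VectorWritable], minPartitions = 10)
  // Text#toString allocates a fresh String per record, so the key no
  // longer aliases the record reader's single reused Text instance;
  // VectorWritable#get() returns the vector deserialized for this record.
  .map { case (key, vec) => (key.toString, vec.get()) }

val keyVec = rdd.map(_._1).collect.distinct  // expect one entry per document key
{code}

If drmFromHDFS needs to keep Writable keys, a per-record deep copy (e.g. new Text(key)) inside the map serves the same purpose; the essential point is that the copy is taken before the record reader advances to the next entry.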