[ https://issues.apache.org/jira/browse/MAHOUT-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Eastman updated MAHOUT-397:
--------------------------------

    Attachment: MAHOUT-397.patch

This patch resolves the issue by propagating the number-of-reducers 
argument through to the back-end processing steps where the actual output 
vectors are produced. It also includes a small modification to 
SequenceFilesFromDirectory that removes the chunk-size upsizing to 64 MB, 
which allows the Reuters data to be split into 3 smaller files for better 
parallelism. All unit tests pass.
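For context on why the reducer count matters here: each reduce task writes its own part file, and Hadoop's default partitioner assigns keys to reducers by hash, so the -nr value directly determines how many vector files come out of the vectorization jobs. A self-contained sketch of that behavior (plain Java, no Hadoop dependency; the class and method names are illustrative, and the hash math mirrors Hadoop's default HashPartitioner):

```java
import java.util.*;

public class PartitionSketch {

  // Mirrors Hadoop's default HashPartitioner: maps a key to a reducer index
  // in [0, numReducers). The mask keeps the hash non-negative.
  static int partition(String key, int numReducers) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
  }

  // Groups keys into one bucket per reducer; each bucket corresponds to one
  // part-NNNNN output file written by that reducer.
  static Map<Integer, List<String>> shuffle(List<String> keys, int numReducers) {
    Map<Integer, List<String>> buckets = new TreeMap<>();
    for (int r = 0; r < numReducers; r++) {
      buckets.put(r, new ArrayList<>());
    }
    for (String key : keys) {
      buckets.get(partition(key, numReducers)).add(key);
    }
    return buckets;
  }

  public static void main(String[] args) {
    List<String> docIds =
        Arrays.asList("doc1", "doc2", "doc3", "doc4", "doc5", "doc6");
    // With 1 reducer, every vector lands in a single part file...
    System.out.println(shuffle(docIds, 1).size()); // 1
    // ...with 3 reducers, the vectors spread across 3 part files, so a
    // downstream job such as LDA can run 3 mappers in parallel.
    System.out.println(shuffle(docIds, 3).size()); // 3
  }
}
```

This is why failing to forward -nr collapses the output to one file: an unset reducer count falls back to a single reducer, and every downstream job is then limited to a single input split.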

Files modified:
M       core/src/main/java/org/apache/mahout/clustering/lda/LDADriver.java
M       utils/src/test/java/org/apache/mahout/utils/vectors/text/DictionaryVectorizerTest.java
M       utils/src/main/java/org/apache/mahout/utils/vectors/text/DictionaryVectorizer.java
M       utils/src/main/java/org/apache/mahout/utils/vectors/common/PartialVectorMerger.java
M       utils/src/main/java/org/apache/mahout/utils/vectors/tfidf/TFIDFConverter.java
M       utils/src/main/java/org/apache/mahout/text/SparseVectorsFromSequenceFiles.java
M       examples/src/main/java/org/apache/mahout/text/SequenceFilesFromDirectory.java
M       examples/bin/build-reuters.sh

With the attached build-reuters.sh, LDA iterations run in about 1.5 min vs. 
5.5 min with a single vector file on a 3-node cluster, using 3 mappers and 
2-3 reducers for the vectorization. I will commit this in a day or so, but 
would like some more eyeballs on it first since this is new code for me.

> SparseVectorsFromSequenceFiles only outputs a single vector file
> ----------------------------------------------------------------
>
>                 Key: MAHOUT-397
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-397
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Utils
>    Affects Versions: 0.3
>            Reporter: Jeff Eastman
>            Assignee: Jeff Eastman
>             Fix For: 0.4
>
>         Attachments: MAHOUT-397.patch
>
>
> When running LDA via build-reuters.sh on a 3-node Hadoop cluster, I've 
> noticed that there is only a single vector file produced by the utility 
> preprocessing steps. This means LDA (and other clustering too) can only use a 
> single mapper no matter how large the cluster is. Investigating, it seems 
> that the program argument (-nr) for setting the number of reducers - and 
> hence the number of output files - is not propagated to the final stages 
> where the output vectors are created.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
