[ https://issues.apache.org/jira/browse/MAHOUT-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12855381#action_12855381 ]

Kris Jack commented on MAHOUT-372:
----------------------------------

Thanks for your reply.  I'll run it using the command-line parameters and 
hopefully get it working faster.  Thanks also for letting me know about the 
other mailing list; I'll use that in the future for such questions.
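
(For reference, a rough sketch of what such a command line could look like.  
The jar name and the --input/--output paths below are assumptions that depend 
on the particular Mahout 0.4 build and data layout; -Dmapred.reduce.tasks is 
Hadoop's standard generic option for requesting a reducer count and is passed 
before the job's own arguments:

    hadoop jar mahout-core-0.4-job.jar \
        org.apache.mahout.cf.taste.hadoop.item.RecommenderJob \
        -Dmapred.reduce.tasks=4 \
        --input ... --output ...
)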

> Partitioning Collaborative Filtering Job into Maps and Reduces
> --------------------------------------------------------------
>
>                 Key: MAHOUT-372
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-372
>             Project: Mahout
>          Issue Type: Question
>          Components: Collaborative Filtering
>    Affects Versions: 0.4
>         Environment: Ubuntu Koala
>            Reporter: Kris Jack
>            Assignee: Sean Owen
>             Fix For: 0.4
>
>
> I am running the org.apache.mahout.cf.taste.hadoop.item.RecommenderJob main 
> on my Hadoop cluster and it partitions the job into 2 tasks, although I have 
> more than 2 nodes available.  I read that the partitioning can be changed by 
> calling the JobConf's conf.setNumMapTasks(int num) and 
> conf.setNumReduceTasks(int num).
> Would I be right in assuming that increasing these (say, to 4) would speed 
> up the processing?  Can this job be partitioned across more reducers?  If 
> so, would setting them in AbstractJob's protected JobConf prepareJobConf() 
> method be appropriate?
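
For reference, a minimal sketch of the two JobConf calls mentioned in the 
quoted question, using Hadoop's old "mapred" API.  The class and method names 
here are illustrative, not Mahout's actual code.  Note that setNumMapTasks() 
is only a hint (the real number of map tasks follows the input splits), while 
setNumReduceTasks() does control how many reducers run:

    import org.apache.hadoop.mapred.JobConf;

    public class PartitioningExample {
        // Hypothetical helper: apply the requested parallelism to a JobConf.
        public static JobConf withParallelism(JobConf conf, int numTasks) {
            conf.setNumMapTasks(numTasks);    // hint only; input splits decide the real map count
            conf.setNumReduceTasks(numTasks); // honored: this many reduce tasks will be scheduled
            return conf;
        }
    }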

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
