[ https://issues.apache.org/jira/browse/MAHOUT-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088916#comment-14088916 ]

ASF GitHub Bot commented on MAHOUT-1594:
----------------------------------------

Github user andrewpalumbo commented on a diff in the pull request:

    https://github.com/apache/mahout/pull/38#discussion_r15920526
  
    --- Diff: examples/bin/factorize-movielens-1M.sh ---
    @@ -66,12 +82,17 @@ $MAHOUT recommendfactorized --input ${WORK_DIR}/als/out/userRatings/ --output ${
     
     # print the error
     echo -e "\nRMSE is:\n"
    -cat ${WORK_DIR}/als/rmse/rmse.txt
    +$CAT ${WORK_DIR}/als/rmse/rmse.txt
     echo -e "\n"
     
     echo -e "\nSample recommendations:\n"
    -shuf ${WORK_DIR}/recommendations/part-m-00000 |head
    +$CAT ${WORK_DIR}/recommendations/part-m-00000 |shuf |head
     echo -e "\n\n"
     
     echo "removing work directory"
    -rm -rf ${WORK_DIR}
    \ No newline at end of file
    +if [ $MAHOUT_LOCAL -eq 1 ]
    --- End diff ---
    
    The MAHOUT_LOCAL variable will very likely be either set to "true" or unset. A
    simple check for it being set will do, rather than checking that it is set to an
    integer.
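
    A minimal sketch of the suggested check, in bash (assuming MAHOUT_LOCAL is
    either unset or set to a non-empty value such as "true", as bin/mahout
    expects; the HDFS cleanup branch is illustrative and not taken from this
    diff):

        if [ -n "$MAHOUT_LOCAL" ]; then
          # MAHOUT_LOCAL is set: the work directory lives on the local filesystem
          rm -rf ${WORK_DIR}
        else
          # MAHOUT_LOCAL is unset: the work directory was written to HDFS
          hadoop fs -rm -r ${WORK_DIR}
        fi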


> Example factorize-movielens-1M.sh does not use HDFS
> ---------------------------------------------------
>
>                 Key: MAHOUT-1594
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1594
>             Project: Mahout
>          Issue Type: Bug
>          Components: Examples
>    Affects Versions: 0.9
>         Environment: Hadoop version: 2.4.0.2.1.1.0-385
> Git hash: 2b65475c3ab682ebd47cffdc6b502698799cd2c8 (trunk)
>            Reporter: jaehoon ko
>            Priority: Minor
>              Labels: newbie, patch
>             Fix For: 1.0
>
>         Attachments: MAHOUT-1594.patch
>
>
> It seems that factorize-movielens-1M.sh does not use HDFS at all. All paths 
> look like local paths, not HDFS paths, so the example crashes immediately 
> because it cannot find its input data in HDFS:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does 
> not exist: /tmp/mahout-work-hoseog.lee/movielens/ratings.csv
>         at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:320)
>         at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:263)
>         at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:375)
>         at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
>         at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
>         at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
>         at 
> org.apache.mahout.cf.taste.hadoop.als.DatasetSplitter.run(DatasetSplitter.java:94)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>         at 
> org.apache.mahout.cf.taste.hadoop.als.DatasetSplitter.main(DatasetSplitter.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>         at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:153)
>         at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> {code}
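> A minimal workaround sketch, assuming the converted ratings.csv already exists
> locally under ${WORK_DIR}/movielens and that the HDFS /tmp tree is writable
> (the paths follow the script's conventions; the staging step itself is
> hypothetical):
> {code}
> # stage the locally converted ratings into HDFS so DatasetSplitter can read them
> hadoop fs -mkdir -p ${WORK_DIR}/movielens
> hadoop fs -put ${WORK_DIR}/movielens/ratings.csv ${WORK_DIR}/movielens/ratings.csv
> {code}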



