I have now tested on a fresh cluster of Cloudera 5.2. Mahout 0.9 comes
installed with it.
My input data is just five lines, tab-separated. I typed this data myself,
so I do not expect anything else in it.
1	100	1
1	200	5
1	400	1
2	200	2
2	300	1
I use
The problem is that seqdirectory doesn't do what you want. From the
documentation page:
The output of seqDirectory will be a SequenceFile<Text, Text> of
all documents (/sub-directory-path/documentFileName, documentText).
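One way to confirm this is to dump the key/value classes of the file that seqdirectory produced. A minimal sketch (the path below is a placeholder, not taken from this thread):

```shell
# Dump the first few records of the sequence file; seqdumper also prints
# the key and value classes. seqdirectory output will show Text keys,
# whereas recommendfactorized's input job expects IntWritable user IDs.
# The path is a placeholder assumption.
mahout seqdumper -i /user/me/seqfiles/part-m-00000 | head
```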
Please see
Thank you for the reply.
I proceeded as per the example listed on the Apache Mahout help page at this
link:
https://mahout.apache.org/users/recommender/intro-als-hadoop.html
As per Step 4 of this link, after creation of sequence
Unless I am missing something, the documentation in the link doesn't say
anything about using seqdirectory.
I don't remember how it works in 0.7, but it basically says:
Given a file of lines of userId\titemId\trating,
1. run mahout parallelALS
2. run mahout recommendfactorized
The input file for the 2nd
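A hedged sketch of those two steps, following the intro-als-hadoop page; all paths and numeric values below are placeholder assumptions, not values from this thread:

```shell
# Step 1: factorize the ratings matrix with ALS.
# Input is a text file of lines: userId<TAB>itemId<TAB>rating.
mahout parallelALS \
  --input /user/me/ratings.tsv \
  --output /user/me/als-out \
  --lambda 0.1 \
  --numFeatures 20 \
  --numIterations 10

# Step 2: compute top-N recommendations from the factorization.
# --input here is a sequence file of user IDs (IntWritable keys),
# not the raw text ratings file.
mahout recommendfactorized \
  --input /user/me/users.seq \
  --userFeatures /user/me/als-out/U/ \
  --itemFeatures /user/me/als-out/M/ \
  --numRecommendations 1 \
  --maxRating 5 \
  --output /user/me/recommendations
```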
Well, I have tried again. The Mahout documentation at this link (
https://mahout.apache.org/users/recommender/intro-als-hadoop.html )
says that once user and item features have been obtained, we proceed as
follows:
1. For the users we now want recommendations for, we list them in a
sequence file
Looks like maybe a mismatch between the Mahout version you compiled code
against and the Mahout version installed in the cluster?
On Nov 24, 2014, at 8:08 AM, Ashok Harnal ashokhar...@gmail.com wrote:
Thanks for the reply. Here are the facts:
1. I am using mahout shell command and not a java
Thanks for the reply. I did not compile Mahout. Mahout 0.9 comes along with
Cloudera 5.2.
Ashok Kumar Harnal
On 24 November 2014 at 18:42, jayunit...@gmail.com wrote:
Looks like maybe a mismatch between the Mahout version you compiled code
against and the Mahout version installed in the cluster?
The error message that you got indicated that some input was textual and
needed to be an integer.
Is there a chance that the type of some of your input is incorrect in your
sequence files?
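Before regenerating the sequence files, it may help to sanity-check the raw ratings file itself. A minimal sketch in POSIX sh/awk; the file name ratings.tsv and the sample rows are assumptions for the demo, not data from this thread:

```shell
# Write a small sample ratings file (contents are illustrative only).
printf '1\t100\t1\n1\t200\t5\n2\t300\t1\n' > ratings.tsv

# Flag any line that is not exactly userId<TAB>itemId<TAB>rating with
# integer user and item IDs; such rows would later surface as the
# "Text cannot be cast to IntWritable" error.
bad=$(awk -F'\t' 'NF != 3 || $1 !~ /^[0-9]+$/ || $2 !~ /^[0-9]+$/ { print NR }' ratings.tsv)
if [ -z "$bad" ]; then
  echo "format OK"
else
  echo "bad lines: $bad"
fi
```

For the sample rows above this prints "format OK"; a line with a stray header or space-separated fields would be reported by line number instead.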
On Mon, Nov 24, 2014 at 3:47 PM, Ashok Harnal ashokhar...@gmail.com wrote:
Thanks for the reply. I did not
Thanks for the reply. I will recheck and repeat the experiment using
self-typed input.
I am reinstalling Cloudera 5.2.
Ashok Kumar Harnal
On 24 November 2014 at 21:38, Ted Dunning ted.dunn...@gmail.com wrote:
The error message that you got indicated that some input was textual and
needed to
I upgraded to Mahout 0.9. The same error persists. Here is the full dump.
Incidentally, I am using the local file system and not Hadoop.
[ashokharnal@master ~]$ mahout recommendfactorized --input
/user/ashokharnal/seqfiles --userFeatures $res_out_file/U/ --itemFeatures
$res_out_file/M/
Can you paste a sample of your input data? The exception is this:
java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to
org.apache.hadoop.io.IntWritable
at
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:406)
On Nov 23, 2014, at 4:31 AM, Ashok Harnal
I use mahout 0.7 installed in Cloudera. After creating user-feature and
item-feature matrix in hdfs, I run the following command:
mahout recommendfactorized --input /user/ashokharnal/seqfiles
--userFeatures $res_out_file/U/ --itemFeatures $res_out_file/M/
--numRecommendations 1 --output
Please upgrade to Mahout version 0.9, as many things have been fixed since.
On Nov 22, 2014, at 7:00 PM, Ashok Harnal ashokhar...@gmail.com wrote:
I use mahout 0.7 installed in Cloudera. After creating user-feature and
item-feature matrix in hdfs, I run the following command:
mahout