Yup, that's it. And you can throw out the last column; then it's in the
standard input format for any of these jobs.

If I recall, you can do this on a Unix command line with something like

tr -s ':' ',' < ratings.dat | cut -f1-3 -d, > ratings.txt
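For instance, on one line in the ML-10M `UserID::MovieID::Rating::Timestamp` layout (the sample value below is just illustrative), the pipeline should behave like this:

```shell
# Sample line in GroupLens ratings.dat format: UserID::MovieID::Rating::Timestamp
echo '1::122::5::838985046' > ratings.dat

# tr turns each ':' into ',' and -s squeezes the doubled separator into one;
# cut then keeps only the first three comma-delimited fields (drops the timestamp)
tr -s ':' ',' < ratings.dat | cut -f1-3 -d, > ratings.txt

cat ratings.txt   # → 1,122,5
```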

On Sat, Nov 20, 2010 at 10:14 AM, Stefano Bellasio <
[email protected]> wrote:

> Thanks for the answer :) Well, right now I have a ratings.dat file; what do
> I have to do? Convert it, as you said, to CSV, replacing :: with ,? Thanks
> On 20 Nov 2010, at 11:12, Sean Owen wrote:
>
> > It's the exact same process -- what does "doesn't work" mean? What error?
> >
> > The process of converting the data to CSV is of course entirely
> different.
> > You would not apply that part to such a different input. Just use a text
> > processing tool to convert GroupLens's file to replace "::" with "," and
> > remove the last column.
> >
> > On Sat, Nov 20, 2010 at 10:06 AM, Stefano Bellasio <
> > [email protected]> wrote:
> >
> >> Hi, I want to know the correct steps to use my GroupLens
> >> data set (10M ratings) with Hadoop and RecommenderJob, with the pseudo-
> >> or non-pseudo item-based recommender. Right now I tried to follow the
> >> Wikipedia data set example from Mahout in Action, but it doesn't work,
> >> or I don't understand how to use RecommenderJob with this data set. Can
> >> someone explain this? Thanks :) Stefano
>
>