Go with 0.8.  Definitely.

Hadoop scaleout should be easy.
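On the validation question below: the usual approach is exactly the one Rafal describes, i.e. hold out a test partition and score the top-N output against it. As a minimal, self-contained sketch (hypothetical item IDs, no Mahout dependency, and not something RecommenderJob itself ships with), precision@k and recall@k against a held-out item set look like this:

```java
import java.util.*;

// Sketch of top-N recommender evaluation against a held-out test partition.
// Item IDs and lists here are made-up illustration data.
public class TopNEval {

    // precision@k: fraction of the top-k recommended items that appear
    // in the user's held-out (test) item set.
    static double precisionAtK(List<String> recs, Set<String> held, int k) {
        int hits = 0;
        for (int i = 0; i < Math.min(k, recs.size()); i++) {
            if (held.contains(recs.get(i))) hits++;
        }
        return (double) hits / k;
    }

    // recall@k: fraction of the held-out items recovered in the top-k list.
    static double recallAtK(List<String> recs, Set<String> held, int k) {
        int hits = 0;
        for (int i = 0; i < Math.min(k, recs.size()); i++) {
            if (held.contains(recs.get(i))) hits++;
        }
        return held.isEmpty() ? 0.0 : (double) hits / held.size();
    }

    public static void main(String[] args) {
        // Top-4 recommendations for one user, and the items that user
        // actually interacted with in the held-out partition.
        List<String> recs = Arrays.asList("itemA", "itemB", "itemC", "itemD");
        Set<String> held = new HashSet<>(Arrays.asList("itemB", "itemD", "itemE"));

        System.out.printf("precision@4 = %.2f%n", precisionAtK(recs, held, 4)); // 2 hits / 4 = 0.50
        System.out.printf("recall@4 = %.2f%n", recallAtK(recs, held, 4));       // 2 hits / 3 held-out
    }
}
```

Averaging these per-user numbers over all users in the test partition gives the usual aggregate figures; the confusion-matrix analysis in R can then be reserved for rating-prediction accuracy rather than top-N quality.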


On Wed, Jul 31, 2013 at 4:19 PM, Rafal Lukawiecki <
[email protected]> wrote:

> Thank you!
>
> In general, should we be putting our efforts into 0.8, or stick with
> 0.7 for now, re RecommenderJob?
>
> On another note (which might be a different thread), would you have any
> ready-made accuracy and reliability validation code to suggest when using
> RecommenderJob, or do I need to stick with predicting from test data/test
> partitions and analysing the resulting confusion matrices in R, etc.? Any
> turnkey aids would help entice new users.
>
> Rafal
>
> PS. Another reason for using RJ in our use case is the hopeful, assumed
> promise of a Hadoop-derived scale-out, when needed in the near future.
> Mixed results so far on that end.
> --
> Rafal Lukawiecki
> Pardon my brevity, sent from a telephone.
>
> On 1 Aug 2013, at 00:09, "Ted Dunning" <[email protected]> wrote:
>
> > On Wed, Jul 31, 2013 at 4:06 PM, Rafal Lukawiecki <
> > [email protected]> wrote:
> >
> >> Many thanks, I'll report the issue, when I figure out where. :)
> >
> > I can help with that!
> >
> > https://issues.apache.org/jira/browse/MAHOUT
>
