The code you have been using is quite separate from the Hadoop-based
recommender code. There is really no way to just modify it a little to
use Hadoop. The Hadoop-based code in org.apache.mahout.cf.taste.hadoop
operates as a big batch job that computes results in bulk, not
real-time.
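For concreteness, the batch job is typically launched from the command line rather than through the Taste Recommender interface. A minimal sketch, assuming a Mahout 0.6-era job jar and input in the usual userID,itemID,preference CSV form (paths and version are illustrative, not exact):

```shell
# Run the fully distributed item-based recommender as a batch job.
# Input: CSV lines of userID,itemID,preference on HDFS.
# Output: one file of recommendations per user, also on HDFS.
hadoop jar mahout-core-0.6-job.jar \
  org.apache.mahout.cf.taste.hadoop.item.RecommenderJob \
  --input /path/to/ratings.csv \
  --output /path/to/recommendations \
  --similarityClassname SIMILARITY_COSINE \
  --numRecommendations 10
```

Note that this computes recommendations for all users in one pass; there is no per-request, real-time query against it.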

The closest thing to combining the two is
org.apache.mahout.cf.taste.hadoop.pseudo, which lets you run many
non-distributed recommenders on Hadoop. This doesn't "really" scale --
the non-distributed recommenders still run out of memory at some point
-- but it at least lets you leverage Hadoop's scheduling to run the
same process on many machines.
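As a sketch of what that looks like in practice: the pseudo-distributed job takes the class name of a regular Taste Recommender and runs copies of it across the cluster, partitioning users among them. The flags and jar name below are illustrative of the Mahout 0.6-era interface, not exact:

```shell
# Run many copies of a non-distributed Taste recommender on Hadoop.
# Each mapper instantiates the named Recommender class and serves
# recommendations for its share of the users.
hadoop jar mahout-core-0.6-job.jar \
  org.apache.mahout.cf.taste.hadoop.pseudo.RecommenderJob \
  --input /path/to/ratings.csv \
  --output /path/to/recommendations \
  --recommenderClassName \
    org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender \
  --numRecommendations 10
```

The recommender class must have a constructor taking a DataModel, and every mapper still loads the whole data model into memory -- which is why this approach parallelizes the work but does not remove the memory ceiling.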

I am currently working on a related project / company called Myrrix
(http://myrrix.com/) which may be exactly what you're looking for. It
is an attempt to expose the real-time Taste APIs in a way that also
lets you plug in Hadoop on the back end for processing.

Sean

On Thu, Apr 12, 2012 at 10:10 PM,  <[email protected]> wrote:
> Hi,
>    I am working with Mahout & Hadoop/Map-Reduce to build a scalable 
> recommendation engine. Right now I am using the Taste library APIs for 
> providing recommendations. I have been using the recommendation APIs provided 
> by Mahout such as the GenericItemBasedRecommender, Slope-One etc.
> I have been able to get it to work standalone, but I need to run it as a Map 
> Reduce job for improved scalability.
> Could you let me know a good starting point as to how I may go about using 
> the Taste library APIs to provide recommendations, and have the internal 
> processing done using Map/Reduce?
>
> As of now I believe that I need to modify the 
> GenericItemBasedRecommender.java file for this.
>
> Appreciate your help!
>
> Thank You,
>
> Sincerely,
>
> Devadas Mallya.
