Are you using the "pseudo-distributed" RecommenderJob? There are a few
RecommenderJobs!

Can you cache a DataModel in memory across workers in a cluster? No --
the workers may not be on the same machine, or even in the same
datacenter. Each worker has to load its own copy.

But it sounds a bit like you are trying to have a servlet make
recommendations in real-time by calling out to Hadoop. This will never
work. Hadoop is a big batch-oriented framework.

What you can do is pre-compute recommendations with Hadoop, as you are
doing, and write to HDFS. Then the servlet can load recs from HDFS,
yes. No problem there.
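For example, here is a minimal sketch of how servlet-side code might parse one line of pre-computed output it has read from HDFS. The tab-and-bracket line format ("userID\t[item1:score1,item2:score2,...]") is an assumption you should verify against your Mahout version's RecommenderJob output, and RecLineParser is a hypothetical helper name, not a Mahout class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical parser for one output line of RecommenderJob,
// as read from the part files it writes to HDFS.
// Assumed format (check your Mahout version):
//   userID<TAB>[itemID1:score1,itemID2:score2,...]
public class RecLineParser {

    public static Map<Long, Float> parse(String line) {
        String[] parts = line.split("\t");
        // Strip the surrounding [ and ] from the recommendation list.
        String body = parts[1].substring(1, parts[1].length() - 1);
        Map<Long, Float> recs = new LinkedHashMap<>();
        if (body.isEmpty()) {
            return recs;
        }
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":");
            recs.put(Long.parseLong(kv[0]), Float.parseFloat(kv[1]));
        }
        return recs;
    }

    public static void main(String[] args) {
        Map<Long, Float> recs = parse("123\t[45:4.5,67:3.2]");
        System.out.println(recs); // prints {45=4.5, 67=3.2}
    }
}
```

The servlet would read lines like this (e.g. via Hadoop's FileSystem API or after copying the part files out of HDFS) and keep the parsed map in memory, so serving a request is just a lookup.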

On Thu, Dec 30, 2010 at 7:45 AM, Alessandro Binhara <[email protected]> wrote:
> Hello everyone
>
> I am studying the RecommenderJob to run a recommendation system on Hadoop.
> Currently my DataModel is loaded as a singleton and cached in memory, and I
> have a servlet that responds to requests sent to Mahout.
>
> When using this RecommenderJob on Hadoop, will it load the data model from
> the HDFS files every time and then process the recommendation?
>
> Is it possible to use some strategy to cache this in the cluster?
>
> The recommendation results will be written to HDFS, so how do I
> identify the answer? Is there a job ID in Hadoop?
>
> thanks
>
