Hi Alessandro, I'm not quite sure I understand what you are trying to accomplish.
Usually you would use RecommenderJob to precompute recommendations and then feed them into your live system in whatever way fits (a database, a Solr server). You can also use ItemSimilarityJob to precompute just the item-item similarities, copy the resulting files to your live system, load them via FileItemSimilarity, and have Taste compute the recommendations online from there. A third option is to skip Hadoop entirely and have Taste compute everything in real time. I suggest you try that last option first and see whether it fits your use case, as it is the simplest and most convenient path to take. Rough sketches of the latter two options follow below.
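A minimal sketch of the FileItemSimilarity route (the file names are placeholders, and the CSV layouts in the comments are the usual Taste conventions — adapt them to whatever ItemSimilarityJob actually produced for you):

    import java.io.File;

    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.file.FileItemSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

    public class PrecomputedSimilarityRecommender {

      public static void main(String[] args) throws Exception {
        // preference data on the live system, one "userID,itemID,value" per line
        DataModel model = new FileDataModel(new File("preferences.csv"));

        // item-item similarities precomputed by ItemSimilarityJob and copied
        // out of HDFS, one "itemID1,itemID2,similarity" per line
        ItemSimilarity similarity =
            new FileItemSimilarity(new File("similarities.csv"));

        GenericItemBasedRecommender recommender =
            new GenericItemBasedRecommender(model, similarity);

        // compute the top-10 recommendations for user 123 online
        for (RecommendedItem item : recommender.recommend(123L, 10)) {
          System.out.println(item.getItemID() + " : " + item.getValue());
        }
      }
    }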
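And a sketch of the pure in-memory variant with no Hadoop involved at all; the Pearson similarity and the neighborhood size of 25 are just example choices:

    import java.io.File;

    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
    import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;
    import org.apache.mahout.cf.taste.similarity.UserSimilarity;

    public class RealtimeTasteRecommender {

      public static void main(String[] args) throws Exception {
        DataModel model = new FileDataModel(new File("preferences.csv"));

        // everything is computed on the fly, directly from the DataModel
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(25, similarity, model);
        Recommender recommender =
            new GenericUserBasedRecommender(model, neighborhood, similarity);

        for (RecommendedItem item : recommender.recommend(123L, 10)) {
          System.out.println(item.getItemID() + " : " + item.getValue());
        }
      }
    }

Since you already keep your DataModel as a singleton behind a servlet, the same pattern applies here: build the Recommender once at startup and reuse it for every request.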
--sebastian

On 30.12.2010 14:45, Alessandro Binhara wrote:
> Hello everyone,
>
> I am studying RecommenderJob to run a recommendation system on Hadoop.
> Currently my DataModel is loaded as a singleton and cached in memory, and I
> have a servlet that responds to requests sent to Mahout.
>
> When using RecommenderJob on Hadoop, will the job load the data model from
> the HDFS files every time before computing the recommendations?
>
> Is it possible to use some strategy to keep this cache on the cluster?
>
> The recommendations will be written to HDFS; how do I identify the answer?
> Is there any job ID in Hadoop?
>
> Thanks