Yes, the evaluator uses randomness to pick test data and so on, so you will get different results when you run the evaluation multiple times. However, as you have seen, if you fix the random number generator seed, then you should at least still see the same first result, the same second result, and so on. It does not somehow reset the seed between evaluation runs, no.
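To illustrate the behavior with plain `java.util.Random` (a sketch, not Mahout's actual `RandomUtils` internals; `SeedDemo` and `draw` are names made up for this example): the seed is fixed once for the whole run, so consecutive calls continue the same random stream and give different results from each other, but re-running the whole program replays the identical stream.

```java
import java.util.Arrays;
import java.util.Random;

public class SeedDemo {
    // Draw n values from an already-seeded generator,
    // analogous to one evaluate() call consuming randomness.
    static int[] draw(Random rng, int n) {
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            out[i] = rng.nextInt(100);
        }
        return out;
    }

    public static void main(String[] args) {
        // Seed fixed once for the run, as useTestSeed() effectively does.
        Random run1 = new Random(42);
        int[] firstCall  = draw(run1, 5); // first evaluation
        int[] secondCall = draw(run1, 5); // second evaluation continues the stream

        // A fresh run with the same seed replays the identical stream,
        // so the same "first result" and "second result" come back.
        Random run2 = new Random(42);
        System.out.println(Arrays.equals(firstCall,  draw(run2, 5)));
        System.out.println(Arrays.equals(secondCall, draw(run2, 5)));
    }
}
```

Both comparisons print true: the second call's output differs from the first within a run, yet is reproduced exactly when the whole program is run again with the same seed. That is exactly the pattern described in the original question.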
On Tue, Dec 15, 2009 at 6:10 PM, jamborta <[email protected]> wrote:
>
> hi there,
>
> sorry, one more question today.
>
> I have this evaluator which is based on your
> AbstractDifferenceRecommenderEvaluator. I wanted to run it using
> RandomUtils.useTestSeed() to make sure that I get the same results. For
> example, if I have this method:
>
> public void recommend() {
>   try {
>     final DataModel model = new FileDataModel(new File("./data/test_data.data"));
>     RecommenderBuilder build = new SVDBuilder();
>     DataModelBuilder model2 = null;
>     IREvaluatorNoParallel evaluate = new IREvaluatorNoParallel();
>     evaluate.evaluate(build, model2, model, 0.6, 1.0);
>   } catch (FileNotFoundException e) {
>     e.printStackTrace();
>   } catch (TasteException e) {
>     e.printStackTrace();
>   }
> }
>
> I would like to have the same result if I call it more than once.
> However, if I do this:
>
> RandomUtils.useTestSeed();
> test.recommend();
> test.recommend();
>
> it gives me a different result the second time. But if I run it again,
> it gives me the same different results. Not sure I understand how it
> works. Is it possible to do this?
>
> thanks
> --
> View this message in context:
> http://old.nabble.com/RandomUtils.useTestSeed%28%29---Taste-libraries-tp26799137p26799137.html
> Sent from the Mahout User List mailing list archive at Nabble.com.
