Presumably in the result of the evaluation -- the average absolute
difference between actual and estimated preference.

The eval trains with a random subset of the data and tests with the rest.

I just realized from your other mail that you are using a data set
with only 10,000 ratings. That's fairly small, and I wouldn't be
surprised if the random choice of training set has a significant
effect on the model.

You could try 100K ratings or more, simply to see whether that's the
issue; I don't know that it is.
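To make the point concrete: the spread you see across seeds is partly just the variance of a random train/test split, and it shrinks as the data set grows. Here's a minimal, self-contained sketch of that effect in Python -- not Mahout code, and the "recommender" is deliberately a trivial stand-in (it predicts the training-set mean), since only the split-and-score protocol matters here:

```python
import random
import statistics

def evaluate(ratings, seed, train_frac=0.9):
    """Random train/test split, then mean absolute difference between
    actual and estimated preference. The 'model' is just the mean of
    the training set -- a stand-in for a real recommender."""
    rng = random.Random(seed)
    shuffled = ratings[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, test = shuffled[:cut], shuffled[cut:]
    estimate = statistics.fmean(train)  # trivial predictor
    return statistics.fmean(abs(r - estimate) for r in test)

def spread(n_ratings, seeds=range(10)):
    """Max-min range of the evaluation score over several split seeds,
    on synthetic 1-5 star ratings of the given size."""
    base = random.Random(0)
    ratings = [base.randint(1, 5) for _ in range(n_ratings)]
    scores = [evaluate(ratings, s) for s in seeds]
    return max(scores) - min(scores)
```

Running `spread(10_000)` versus `spread(100_000)` shows how much of the seed sensitivity is attributable to split noise alone; whatever remains on the larger data set is more plausibly a property of the recommender itself.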

On Tue, Aug 31, 2010 at 6:08 PM, Ted Dunning <[email protected]> wrote:
> A 20% spread in what?
>
> Speed?  Results?  Iterations?
>
> On Mon, Aug 30, 2010 at 11:26 PM, Lance Norskog <[email protected]> wrote:
>
>> SVDRecommender is really sensitive to the random number seed. AADRE
>> gives about a 20% spread in its evaluations.  (I have only tried
>> AverageAbsoluteDifferenceRecommenderEvaluator.)
>>
>> This test is on the GroupLens small 10k dataset. I'm using the example
>> GroupLensEvaluatorRunner.main. I substituted the SVDRecommender for
>> the
>> SlopeOneRecommender in the example. Otherwise it is the GroupLens
>> example. How many features and how many iterations are needed before
>> the sensitivity converges? Testing all combination ranges is a little
>> tedious on my laptop.
>>
>> Thanks!
>>
>> --
>> Lance Norskog
>> [email protected]
>>
>