Yes, you overfit the training data set, so you "under-fit" the test
set. I'm trying to suggest why more degrees of freedom (features)
make for a "worse" fit. They don't, on the training set, but those
same parameters may fit the test set increasingly badly.

It doesn't make sense to evaluate on a training set.
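To make this concrete, here is a minimal sketch (using NumPy polynomial fitting as a stand-in for any model with tunable complexity, not any particular scikit-learn class) of how adding parameters keeps improving the training fit while the test fit can get worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Hold out every third point as a test set; train on the rest.
test_mask = np.arange(x.size) % 3 == 0
x_train, y_train = x[~test_mask], y[~test_mask]
x_test, y_test = x[test_mask], y[test_mask]

def mse(deg):
    """Mean squared error on train and test for a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, deg)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# More degrees of freedom: train error keeps shrinking,
# but the high-degree fit chases noise and the test error grows.
for deg in (1, 3, 15):
    tr, te = mse(deg)
    print(f"degree {deg:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
```

The degree-15 fit scores best on the points it was fit to, which is exactly why a training-set evaluation tells you nothing about generalization.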

On Thu, May 9, 2013 at 3:21 PM, Gabor Bernat <ber...@primeranks.net> wrote:
> Yes, but overfitting applies to the train dataset, doesn't it? However, now
> I'm evaluating on a test dataset (which is sampled from the whole dataset,
> but that still makes it a test set), so I don't really understand how
> overfitting can become an issue. :-?
>
> Is there any class/function to run the evaluation on the train dataset
> instead?
