Regarding evaluation, I use leave-20%-out cross-validation. I cannot hold
out more because my data sets are very small: between 30 and 40
observations, each with 600 features. Given such small data sets, is there
a limit on the number of MLPRegressors I can combine with stacking?
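For reference, the evaluation described above can be sketched as 5-fold cross-validation (each fold holds out 20% of the data). The dataset shape (35 samples x 600 features), the synthetic target, and the MLP settings below are illustrative assumptions, not from the original message:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a small, wide dataset: 35 observations, 600 features.
rng = np.random.RandomState(0)
X = rng.randn(35, 600)
y = X[:, 0] + 0.1 * rng.randn(35)

# Scaling matters for MLPs; hidden_layer_sizes and max_iter are guesses here.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)

# 5 folds => each test fold is ~20% of the data ("leave 20% out").
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores)
```

With only ~7 observations per test fold, the per-fold scores will be very noisy, which is worth keeping in mind when comparing stacked ensembles of different sizes.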

On Jan 7, 2017 23:04, "Joel Nothman" <joel.noth...@gmail.com> wrote:

>> There is no problem, in general, with overfitting, as long as your
>> evaluation of an estimator's performance isn't biased towards the training
>> set. We've not talked about evaluation.
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
