Briefly:

clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=inner_cv)
nested_score = cross_val_score(clf, X=X_iris, y=y_iris, cv=outer_cv)

(docs: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
and http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html)


Each train/test split in cross_val_score holds out a test fold. GridSearchCV
then splits the corresponding training fold into (inner-)train and validation
sets. There is no leakage of test-set knowledge from the outer loop into the
grid-search optimisation, and no leakage of validation-set knowledge into the
SVR optimisation. Across outer splits the test data are reused as training
data, but within each split they are used only to measure generalisation
error.
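
To make the mechanics explicit, here is roughly what that pair of calls
expands to as hand-written loops. This is a sketch only; the parameter
grid, kernel and fold counts below are illustrative, not the ones from
the linked example:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

X_iris, y_iris = load_iris(return_X_y=True)

# Illustrative grid and estimator; the website example uses its own values.
p_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1]}
svr = SVC(kernel="rbf")

inner_cv = KFold(n_splits=4, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=1)

outer_scores = []
for train_idx, test_idx in outer_cv.split(X_iris):
    # The outer test fold is held out entirely from the grid search.
    X_train, y_train = X_iris[train_idx], y_iris[train_idx]
    X_test, y_test = X_iris[test_idx], y_iris[test_idx]

    # GridSearchCV splits only the outer training fold into
    # (inner-)train and validation sets, then refits the best
    # parameters on the full outer training fold.
    clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=inner_cv)
    clf.fit(X_train, y_train)

    # The outer test fold is used only here, to estimate the
    # generalisation error of the tuned model.
    outer_scores.append(clf.score(X_test, y_test))

nested_score = np.array(outer_scores)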

Is that clear?

On 29 November 2016 at 10:30, Daniel Homola <dani.hom...@gmail.com> wrote:

> Dear all,
>
>
> I was wondering if the following example code is valid:
>
> http://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html
>
> My understanding is that the point of nested cross-validation is to
> prevent any data leakage from the inner grid-search/param optimization CV
> loop into the outer model evaluation CV loop. This could be achieved if the
> outer CV loop's test data were completely separated from the inner loop's
> CV, as shown here:
>
> https://mlr-org.github.io/mlr-tutorial/release/html/img/nested_resampling.png
>
>
> The code in the above example, however, doesn't seem to achieve this in
> any way.
>
>
> Am I missing something here?
>
>
> Thanks a lot,
>
> dh
>
