Github user rawkintrevo commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1898#discussion_r63605367
  
    --- Diff: docs/apis/batch/libs/ml/cross_validation.md ---
    @@ -0,0 +1,175 @@
    +---
    +mathjax: include
    +title: Cross Validation
    +
    +# Sub navigation
    +sub-nav-group: batch
    +sub-nav-parent: flinkml
    +sub-nav-title: Cross Validation
    +---
    +<!--
    +Licensed to the Apache Software Foundation (ASF) under one
    +or more contributor license agreements.  See the NOTICE file
    +distributed with this work for additional information
    +regarding copyright ownership.  The ASF licenses this file
    +to you under the Apache License, Version 2.0 (the
    +"License"); you may not use this file except in compliance
    +with the License.  You may obtain a copy of the License at
    +
    +  http://www.apache.org/licenses/LICENSE-2.0
    +
    +Unless required by applicable law or agreed to in writing,
    +software distributed under the License is distributed on an
    +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +KIND, either express or implied.  See the License for the
    +specific language governing permissions and limitations
    +under the License.
    +-->
    +
    +* This will be replaced by the TOC
    +{:toc}
    +
    +## Description
    +
    +A prevalent problem when utilizing machine learning algorithms is *overfitting*: an algorithm "memorizes" the training data but does a poor job extrapolating to out-of-sample cases. A common method for dealing with overfitting is to hold back some subset of the data from the original training algorithm and then measure the fitted algorithm's performance on this held-out set. This is commonly known as *cross validation*: a model is trained on one subset of data and then *validated* on a different subset.
    +
    +## Cross Validation Strategies
    +
    +There are several strategies for holding out data. FlinkML has convenience methods for:
    +- Train-Test Splits
    +- Train-Test-Holdout Splits
    +- K-Fold Splits
    +- Multi-Random Splits
    +
    +### Train-Test Splits
    +
    +The simplest method of splitting is `trainTestSplit`. This split takes a DataSet and a parameter *fraction*. The *fraction* indicates the portion of the DataSet that should be allocated to the training set. This split also takes two additional optional parameters, *precise* and *seed*.
    +
    +By default, the split is done by randomly deciding whether or not an observation is assigned to the training DataSet with probability = *fraction*. When *precise* is `true`, however, additional steps are taken to ensure the size of the training set is as close as possible to the length of the DataSet $\cdot$ *fraction*.
    +
    +The method returns a new `TrainTestDataSet` object which has a `.training` 
attribute containing the training DataSet and a `.testing` attribute containing 
the testing DataSet.
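    +
    +A minimal usage sketch is shown below. The import path and the `Splitter` object are assumptions about where this PR exposes the method; only `trainTestSplit`, `.training`, and `.testing` are taken from this document.
    +
    +{% highlight scala %}
    +import org.apache.flink.api.scala._
    +import org.apache.flink.ml.preprocessing.Splitter // assumed location of the splitter
    +
    +val env = ExecutionEnvironment.getExecutionEnvironment
    +val data: DataSet[Int] = env.fromCollection(1 to 100)
    +
    +// Assign each observation to the training set with probability 0.6
    +val trainTest = Splitter.trainTestSplit(data, 0.6)
    +
    +val train = trainTest.training // DataSet used to fit the model
    +val test  = trainTest.testing  // DataSet used to evaluate the fit
    +{% endhighlight %}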
    +
    +
    +### Train-Test-Holdout Splits
    +
    +In some cases, algorithms have been known to 'learn' the testing set. To combat this issue, a train-test-holdout strategy introduces a second held-out set, aptly called the *holdout* set.
    +
    +Traditionally, training and testing are performed as usual to train the algorithm, and then a final test of the algorithm is run on the holdout set. Ideally, prediction errors/model scores on the holdout set are not significantly different from those observed on the testing set.
    +
    +In a train-test-holdout strategy we sacrifice sample size during the initial fitting in exchange for increased confidence that our model is not over-fit.
    +
    +When using the `trainTestHoldout` splitter, the *fraction* `Double` is replaced by a *fraction* array of length three. The first element corresponds to the portion to be used for training, the second to testing, and the third to holdout. The weights of this array are *relative*, e.g. an array `Array(3.0, 2.0, 1.0)` would result in approximately 50% of the observations in the training set, 33% in the testing set, and 17% in the holdout set.
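    +
    +Continuing the sketch above, the holdout variant might look like this. The method name `trainTestHoldoutSplit` and the `.holdout` attribute are assumptions; the relative weights are the ones from this paragraph.
    +
    +{% highlight scala %}
    +// Array(3.0, 2.0, 1.0) => roughly 50% / 33% / 17% of the observations
    +val split = Splitter.trainTestHoldoutSplit(data, Array(3.0, 2.0, 1.0))
    +
    +split.training // fit the model here
    +split.testing  // compare / tune models here
    +split.holdout  // final, one-time check against overfitting
    +{% endhighlight %}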
    +
    +### K-Fold Splits
    +
    +In a *k-fold* strategy, the DataSet is split into *k* equal subsets. Then for each of the *k* subsets, a `TrainTestDataSet` is created where that subset is held out as the `.testing` DataSet and the remaining subsets together form the `.training` DataSet.
    +
    +For each training set, an algorithm is trained and then evaluated on its predictions for the associated testing set. When an algorithm has consistent grades (e.g. prediction errors) across the held-out datasets, we can have some confidence that our approach (e.g. choice of algorithm / algorithm parameters / number of iterations) is robust against overfitting.
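    +
    +As a sketch (assuming a `kFoldSplit(data, k)` method that returns an `Array` of `TrainTestDataSet`s):
    +
    +{% highlight scala %}
    +// Split into 4 folds; each element holds one fold as .testing and
    +// the union of the remaining three folds as .training
    +val folds = Splitter.kFoldSplit(data, 4)
    +
    +folds.foreach { fold =>
    +  // fit on fold.training, score on fold.testing, and afterwards
    +  // compare the k scores for consistency
    +}
    +{% endhighlight %}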
    +
    +<a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation">K-Fold Cross Validation</a>
    --- End diff --
    
    It's been a while since I wrote this, but I vaguely remember having some sort of issue, either with the build (or just my markdown interpreter), specifically on this link; I think it had to do with the parentheses in the link? That's why I fell back to HTML. Also, I can never get markdown links to come out correctly in tables.

