[
https://issues.apache.org/jira/browse/SPARK-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260399#comment-15260399
]
Seth Hendrickson commented on SPARK-8971:
-----------------------------------------
I've got an improved version of the original PR that computes multiple splits
at once in a single pass over the data, but it uses {{PairRDDFunctions}} and
is not implemented for the DataFrame API. If that is acceptable (i.e. we
implement for RDDs initially), then what remains is the semantics of the API
for {{TrainValidationSplit}} and {{CrossValidator}}. Specific questions I
have:
* Should users be able to specify which column to stratify on, or should it
always default to the output column?
* Should the stratified splitting always be exact? If all the key weights are
the same and we don't do exact sampling, then the problem of some splits
containing no examples of a given stratum still exists. However, if we expose
a way for users to specify different key weights per stratum, then that adds
functionality.
* Should we expose a way for users to specify key weights per stratum? For
example, with labels 0 and 1, should a user be able to request splits like
split0 = (0 -> 0.2, 1 -> 0.4) and split1 = (0 -> 0.8, 1 -> 0.6)? I don't
think it makes sense, since this would override the {{trainRatio}} parameter.
For this reason, I think we should always use exact stratified sampling.
* Should users have a way to specify the keys in the stratified column? If
not, we require an extra pass through the data to collect the distinct values.
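For concreteness, here is a plain-Scala sketch (collections only, not the
{{PairRDDFunctions}} implementation) of what exact stratified splitting into
multiple splits guarantees: every split receives a deterministic share of
every label, so no split is left without a given class. The object and method
names below are made up for illustration.

```scala
// Plain-Scala sketch of exact stratified splitting -- semantics only,
// not the Spark/PairRDDFunctions implementation. Names are illustrative.
object StratifiedSplitSketch {
  // Partition (label, payload) pairs into splits sized by `weights`,
  // taking an exact share of each label (stratum) for every split.
  def exactStratifiedSplits[T](
      data: Seq[(Double, T)],
      weights: Seq[Double]): Seq[Seq[(Double, T)]] = {
    val total = weights.sum
    val cum = weights.scanLeft(0.0)(_ + _) // cumulative weight bounds
    val byLabel = data.groupBy(_._1).values.toSeq
    weights.indices.map { i =>
      byLabel.flatMap { group =>
        // exact per-stratum slice boundaries for split i
        val start = math.round(group.size * cum(i) / total).toInt
        val end = math.round(group.size * cum(i + 1) / total).toInt
        group.slice(start, end)
      }
    }
  }
}
```

{{TrainValidationSplit}} would correspond to weights (trainRatio,
1 - trainRatio), and {{CrossValidator}} to k equal weights; in either case
each stratum is represented in every split by construction.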
Some example API designs:
* have a {{useStratifiedSampling}} boolean parameter that, when true,
performs stratified sampling on the output column.
* have a {{stratifiedCol}} string parameter that, when set, performs
stratified sampling on the specified column.
* similar to the above, but with an additional way to specify the stratified
key values.
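The three options could be sketched as a single parameter surface; none of
these parameters exist in Spark today, and all names are purely hypothetical:

```scala
// Hypothetical parameter surface for the API options above. None of
// these params exist in Spark; names are illustrative only.
case class StratifiedSplitParams(
    trainRatio: Double = 0.75,
    // option 1: boolean flag, stratify on the output (label) column
    useStratifiedSampling: Boolean = false,
    // option 2: explicit column to stratify on (None = output column)
    stratifiedCol: Option[String] = None,
    // option 3: user-supplied key values, avoiding the extra pass
    // needed to collect distinct keys from the data
    stratifiedKeys: Option[Seq[Double]] = None)
```

Option 2 subsumes option 1 if an unset {{stratifiedCol}} falls back to the
output column, which may argue for exposing only the string parameter.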
I'd really appreciate any feedback about the design and if we want to continue
this PR in the RDD API. cc [~mlnick] [~josephkb]
> Support balanced class labels when splitting train/cross validation sets
> ------------------------------------------------------------------------
>
> Key: SPARK-8971
> URL: https://issues.apache.org/jira/browse/SPARK-8971
> Project: Spark
> Issue Type: New Feature
> Components: ML
> Reporter: Feynman Liang
> Assignee: Seth Hendrickson
>
> {{CrossValidator}} and the proposed {{TrainValidationSplit}} (SPARK-8484) are
> Spark classes which partition data into training and evaluation sets for
> performing hyperparameter selection via cross validation.
> Both methods currently perform the split by randomly sampling the datasets.
> However, when class probabilities are highly imbalanced (e.g. detection of
> extremely low-frequency events), random sampling may result in cross
> validation sets that are not representative of actual out-of-training
> performance (e.g. no positive examples may be included).
> Mainstream R packages like
> [caret|http://topepo.github.io/caret/splitting.html] already support
> splitting the data based upon the class labels.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]