[
https://issues.apache.org/jira/browse/SPARK-19714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen reassigned SPARK-19714:
---------------------------------
Assignee: Wojciech Szymanski
Priority: Minor (was: Major)
Issue Type: Improvement (was: Bug)
Summary: Clarify Bucketizer handling of invalid input (was: Bucketizer
Bug Regarding Handling Unbucketed Inputs)
> Clarify Bucketizer handling of invalid input
> --------------------------------------------
>
> Key: SPARK-19714
> URL: https://issues.apache.org/jira/browse/SPARK-19714
> Project: Spark
> Issue Type: Improvement
> Components: ML, MLlib
> Affects Versions: 2.1.0
> Reporter: Bill Chambers
> Assignee: Wojciech Szymanski
> Priority: Minor
>
> {code}
> val contDF = spark.range(500).selectExpr("cast(id as double) as id")
> import org.apache.spark.ml.feature.Bucketizer
> val splits = Array(5.0, 10.0, 250.0, 500.0)
> val bucketer = new Bucketizer()
>   .setSplits(splits)
>   .setInputCol("id")
>   .setHandleInvalid("skip")
> bucketer.transform(contDF).show()
> {code}
> You would expect this to skip the values that fall outside the splits. Instead, the transform fails:
> {code}
> Caused by: org.apache.spark.SparkException: Feature value 0.0 out of
> Bucketizer bounds [5.0, 500.0]. Check your features, or loosen the
> lower/upper bound constraints.
> {code}
> It seems strange that handleInvalid doesn't actually handle invalid inputs.
> Thoughts, anyone?
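For context on why "skip" does not help here: Bucketizer's handleInvalid applies to NaN features, while values outside the split range always raise the reported error. The sketch below (Python for illustration; a hypothetical re-implementation, not Spark's actual code) mimics the documented split semantics: a value x lands in bucket i when splits[i] <= x < splits[i+1], with the last bucket right-inclusive.

```python
import math

def bucketize(x, splits):
    """Hypothetical sketch of Bucketizer split semantics (not Spark's code)."""
    if math.isnan(x):
        return None  # the "invalid" case that handleInvalid("skip") drops
    if x < splits[0] or x > splits[-1]:
        # out-of-range values always error, reproducing the reported message
        raise ValueError(
            f"Feature value {x} out of Bucketizer bounds "
            f"[{splits[0]}, {splits[-1]}]")
    # last index i with splits[i] <= x, capped so x == splits[-1]
    # stays in the final bucket
    i = max(j for j in range(len(splits)) if splits[j] <= x)
    return float(min(i, len(splits) - 2))

splits = [5.0, 10.0, 250.0, 500.0]
bucketize(7.0, splits)            # 0.0: first bucket [5, 10)
bucketize(500.0, splits)          # 2.0: last bucket is right-inclusive
bucketize(float("nan"), splits)   # None: this is what "skip" handles
# bucketize(0.0, splits) raises ValueError, matching the report.

# The documented way to bucket every value, including out-of-range ones,
# is to pad the splits with infinities:
open_splits = [float("-inf")] + splits + [float("inf")]
bucketize(0.0, open_splits)       # 0.0: the new (-inf, 5) bucket
```

With splits padded by -Infinity/+Infinity there is no out-of-range case left, so the original example runs without error; "skip" then only matters for NaN rows.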
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)