[GitHub] spark pull request #15428: [SPARK-17219][ML] enchanced NaN value handling in...

2016-10-13 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/15428#discussion_r83181476
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Bucketizer.scala 
---
@@ -73,15 +78,27 @@ final class Bucketizer @Since("1.4.0") (@Since("1.4.0") 
override val uid: String
   @Since("1.4.0")
   def setOutputCol(value: String): this.type = set(outputCol, value)
 
+  /** @group setParam */
+  @Since("2.1.0")
+  def setHandleInvalid(value: String): this.type = set(handleInvalid, 
value)
+  setDefault(handleInvalid, "error")
+
   @Since("2.0.0")
   override def transform(dataset: Dataset[_]): DataFrame = {
 transformSchema(dataset.schema)
-val bucketizer = udf { feature: Double =>
-  Bucketizer.binarySearchForBuckets($(splits), feature)
+val bucketizer: UserDefinedFunction = udf { (feature: Double, flag: 
String) =>
+  Bucketizer.binarySearchForBuckets($(splits), feature, flag)
+}
+val filteredDataset = {
--- End diff --

I don't see that the method handles `NaN` below; what binarySearch returns for NaN is undefined. In one place or the other, I think this has to be handled explicitly.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #15428: [SPARK-17219][ML] enchanced NaN value handling in...

2016-10-11 Thread VinceShieh
Github user VinceShieh commented on a diff in the pull request:

https://github.com/apache/spark/pull/15428#discussion_r82743072
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Bucketizer.scala 
---
@@ -73,15 +78,27 @@ final class Bucketizer @Since("1.4.0") (@Since("1.4.0") 
override val uid: String
   @Since("1.4.0")
   def setOutputCol(value: String): this.type = set(outputCol, value)
 
+  /** @group setParam */
+  @Since("2.1.0")
+  def setHandleInvalid(value: String): this.type = set(handleInvalid, 
value)
+  setDefault(handleInvalid, "error")
+
   @Since("2.0.0")
   override def transform(dataset: Dataset[_]): DataFrame = {
 transformSchema(dataset.schema)
-val bucketizer = udf { feature: Double =>
-  Bucketizer.binarySearchForBuckets($(splits), feature)
+val bucketizer: UserDefinedFunction = udf { (feature: Double, flag: 
String) =>
+  Bucketizer.binarySearchForBuckets($(splits), feature, flag)
+}
+val filteredDataset = {
--- End diff --

Nope; actually, without special handling, NaN will trigger an error later in binarySearchForBuckets as an invalid feature value.





[GitHub] spark pull request #15428: [SPARK-17219][ML] enchanced NaN value handling in...

2016-10-11 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/15428#discussion_r82741203
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/param/shared/sharedParams.scala ---
@@ -270,10 +270,10 @@ private[ml] trait HasFitIntercept extends Params {
 private[ml] trait HasHandleInvalid extends Params {
 
   /**
-   * Param for how to handle invalid entries. Options are skip (which will 
filter out rows with bad values), or error (which will throw an error). More 
options may be added later.
+   * Param for how to handle invalid entries. Options are skip (which will 
filter out rows with bad values), or error (which will throw an error), or keep 
(which will keep the bad values in certain way). More options may be added 
later.
--- End diff --

I'm neutral on the complexity this adds, but not against it. It reads a little oddly to say "keep invalid data", but I think we discussed that on the JIRA.





[GitHub] spark pull request #15428: [SPARK-17219][ML] enchanced NaN value handling in...

2016-10-11 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/15428#discussion_r82741770
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Bucketizer.scala 
---
@@ -128,8 +145,9 @@ object Bucketizer extends 
DefaultParamsReadable[Bucketizer] {
* Binary searching in several buckets to place each data point.
* @throws SparkException if a feature is < splits.head or > splits.last
*/
-  private[feature] def binarySearchForBuckets(splits: Array[Double], 
feature: Double): Double = {
-if (feature.isNaN) {
+  private[feature] def binarySearchForBuckets
+  (splits: Array[Double], feature: Double, flag: String): Double = {
--- End diff --

Nit: I think the convention is to leave the open paren on the previous line

Doesn't this need to handle "skip" and "error" as well? Throw an exception on NaN if "error", or ignore the value if "skip"?
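
For what it's worth, a minimal sketch of such handling in plain Scala, outside Spark. The method name matches the diff, but the "keep" convention of assigning NaN to one extra bucket past the last regular one is an assumption here, not the merged implementation:

```scala
// Hypothetical sketch of NaN-aware bucket lookup. `splits` of length n defines
// n - 1 regular buckets; under "keep", NaN goes to one extra bucket (index n - 1).
def binarySearchForBuckets(splits: Array[Double], feature: Double, handleInvalid: String): Double = {
  if (feature.isNaN) {
    handleInvalid match {
      case "keep" => splits.length - 1  // assumed extra-bucket convention
      case _ =>  // "error"; "skip" rows are assumed filtered out before this point
        throw new IllegalArgumentException(s"NaN feature with handleInvalid=$handleInvalid")
    }
  } else {
    val idx = java.util.Arrays.binarySearch(splits, feature)
    if (idx >= 0) {
      // Feature equals a split point; the last split still belongs to the final regular bucket.
      math.min(idx, splits.length - 2)
    } else {
      // binarySearch returns -(insertionPoint) - 1 when the value is not found.
      val insertionPoint = -idx - 1
      if (insertionPoint == 0 || insertionPoint == splits.length) {
        throw new IllegalArgumentException(s"Feature $feature out of splits range")
      }
      insertionPoint - 1
    }
  }
}
```

With splits `Array(0.0, 0.5, 1.0)`, values in [0.0, 0.5) map to bucket 0, values in [0.5, 1.0] to bucket 1, and NaN to bucket 2 under "keep".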





[GitHub] spark pull request #15428: [SPARK-17219][ML] enchanced NaN value handling in...

2016-10-11 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/15428#discussion_r82741459
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Bucketizer.scala 
---
@@ -73,15 +78,27 @@ final class Bucketizer @Since("1.4.0") (@Since("1.4.0") 
override val uid: String
   @Since("1.4.0")
   def setOutputCol(value: String): this.type = set(outputCol, value)
 
+  /** @group setParam */
+  @Since("2.1.0")
+  def setHandleInvalid(value: String): this.type = set(handleInvalid, 
value)
+  setDefault(handleInvalid, "error")
+
   @Since("2.0.0")
   override def transform(dataset: Dataset[_]): DataFrame = {
 transformSchema(dataset.schema)
-val bucketizer = udf { feature: Double =>
-  Bucketizer.binarySearchForBuckets($(splits), feature)
+val bucketizer: UserDefinedFunction = udf { (feature: Double, flag: 
String) =>
+  Bucketizer.binarySearchForBuckets($(splits), feature, flag)
+}
+val filteredDataset = {
--- End diff --

Doesn't this need to try to handle "error"?

```
val filteredDataset = getHandleInvalid match {
  case "skip" => dataset.na.drop
  case "keep" => dataset
  case "error" =>
    if (...dataset contains NaN...) {
      throw new IllegalArgumentException(...)
    } else {
      dataset
    }
}
```
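
Outside Spark, the shape of that dispatch can be sketched over a plain collection. The names and the error message here are illustrative assumptions; the actual NaN check on a Dataset would go through the DataFrame API rather than a Seq:

```scala
// Hypothetical stand-in for the suggested match: "skip" drops NaN rows,
// "keep" passes them through untouched, "error" fails fast if any NaN is present.
def filterByHandleInvalid(values: Seq[Double], handleInvalid: String): Seq[Double] =
  handleInvalid match {
    case "skip" => values.filterNot(_.isNaN)
    case "keep" => values
    case "error" =>
      if (values.exists(_.isNaN)) {
        throw new IllegalArgumentException("Data contains NaN but handleInvalid is set to error")
      } else {
        values
      }
    case other =>
      throw new IllegalArgumentException(s"Unsupported handleInvalid value: $other")
  }
```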





[GitHub] spark pull request #15428: [SPARK-17219][ML] enchanced NaN value handling in...

2016-10-11 Thread VinceShieh
GitHub user VinceShieh opened a pull request:

https://github.com/apache/spark/pull/15428

[SPARK-17219][ML] enchanced NaN value handling in Bucketizer

## What changes were proposed in this pull request?

This PR is an enhancement of the PR with commit ID 57dc326bd00cf0a49da971e9c573c48ae28acaa2.
NaN is a special value that is commonly treated as invalid, but there are cases where NaN values are meaningful and need special handling. This PR gives users three options for dealing with NaN values: reserve an extra bucket for them ("keep"), remove the rows containing them ("skip"), or report an error ("error", the default), set via setHandleInvalid.

Before:
```scala
val bucketizer: Bucketizer = new Bucketizer()
  .setInputCol("feature")
  .setOutputCol("result")
  .setSplits(splits)
```
After:
```scala
val bucketizer: Bucketizer = new Bucketizer()
  .setInputCol("feature")
  .setOutputCol("result")
  .setSplits(splits)
  .setHandleInvalid("keep")
```

## How was this patch tested?
Tests added in QuantileDiscretizerSuite and BucketizerSuite

Signed-off-by: VinceShieh 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/VinceShieh/spark spark-17219_followup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/15428.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #15428


commit a3e43086dcf6ecee20461567e2cc506db29f80a7
Author: VinceShieh 
Date:   2016-10-10T02:33:09Z

[SPARK-17219][ML] enchance NaN value handling in Bucketizer

This PR is an enhancement of the PR with commit ID 57dc326bd00cf0a49da971e9c573c48ae28acaa2.
It gives users three options for dealing with NaN values in the dataset: reserve an extra bucket for them ("keep"), remove the rows containing them ("skip"), or report an error ("error", the default), set via setHandleInvalid.

Before:
```scala
val bucketizer: Bucketizer = new Bucketizer()
  .setInputCol("feature")
  .setOutputCol("result")
  .setSplits(splits)
```
After:
```scala
val bucketizer: Bucketizer = new Bucketizer()
  .setInputCol("feature")
  .setOutputCol("result")
  .setSplits(splits)
  .setHandleInvalid("skip")
```

Signed-off-by: VinceShieh 



