[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/11601


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-08 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104890651
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", s"strategy for imputation. " +
+    s"If ${Imputer.mean}, then replace missing values using the mean value of the feature. " +
+    s"If ${Imputer.median}, then replace missing values using the median value of the feature.",
+    ParamValidators.inArray[String](Array(Imputer.mean, Imputer.median)))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, "missingValue",
+    "The placeholder for the missing values. All occurrences of missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, "outputCols",
+    "output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+    require($(inputCols).length == $(inputCols).distinct.length, s"inputCols contains" +
+      s" duplicates: (${$(inputCols).mkString(", ")})")
+    require($(outputCols).length == $(outputCols).distinct.length, s"outputCols contains" +
+      s" duplicates: (${$(outputCols).mkString(", ")})")
+    require($(inputCols).length == $(outputCols).length, s"inputCols(${$(inputCols).length})" +
+      s" and outputCols(${$(outputCols).length}) should have the same length")
+    val outputFields = $(inputCols).zip($(outputCols)).map { case (inputCol, outputCol) =>
+      val inputField = schema(inputCol)
+      SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
+      StructField(outputCol, inputField.dataType, inputField.nullable)
+    }
+    StructType(schema ++ outputFields)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features yet
+ * (SPARK-15041) and possibly creates incorrect values for a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing values.
+ * All Null values in the input column are treated as missing, and so are also imputed.
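Since the quoted file is cut off before the Imputer class body, a minimal end-to-end usage sketch may help orient readers. This is only a sketch: the setInputCols/setOutputCols/setStrategy setters and the default missingValue of Double.NaN are assumed from the merged form of this PR, not shown in the quote above.

```scala
import org.apache.spark.ml.feature.Imputer
import org.apache.spark.sql.SparkSession

object ImputerSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("imputer-sketch").getOrCreate()
    import spark.implicits._

    // NaN is the default missingValue placeholder; nulls are always treated as missing.
    val df = Seq((1.0, Double.NaN), (2.0, 4.0), (Double.NaN, 6.0)).toDF("a", "b")

    val imputer = new Imputer()
      .setInputCols(Array("a", "b"))
      .setOutputCols(Array("a_imputed", "b_imputed"))
      .setStrategy("mean") // or "median" (approximate)

    // fit() computes a per-column surrogate (here the mean, after filtering
    // out missing values); transform() appends the output columns with
    // missing entries replaced by that surrogate.
    imputer.fit(df).transform(df).show()

    spark.stop()
  }
}
```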

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-08 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104889721
  

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-08 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104889417
  

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-07 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104638951
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -99,7 +98,8 @@ private[feature] trait ImputerParams extends Params with HasInputCols {
  * (SPARK-15041) and possibly creates incorrect values for a categorical feature.
  *
  * Note that the mean/median value is computed after filtering out missing values.
- * All Null values in the input column are treated as missing, and so are also imputed.
+ * All Null values in the input column are treated as missing, and so are also imputed. For
+ * computing median, DataFrameStatFunctions.approxQuantile is used with a relative error of 0.001.
--- End diff --

Ah I see it is here - nevermind
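The median path discussed here goes through DataFrameStatFunctions.approxQuantile. A minimal sketch of that call with the PR's relative error of 0.001 (a local SparkSession is assumed; the data is illustrative only):

```scala
import org.apache.spark.sql.SparkSession

object ApproxQuantileSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("quantile-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(1.0, 2.0, 3.0, 4.0, 100.0).toDF("value")

    // Approximate median via the Greenwald-Khanna sketch. The third argument
    // is the relative target precision (0.001 in this PR); 0.0 would force an
    // exact, and more expensive, computation.
    val Array(median) = df.stat.approxQuantile("value", Array(0.5), 0.001)
    println(s"approximate median = $median")

    spark.stop()
  }
}
```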





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-07 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104638806
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
--- End diff --

Ah right - perhaps just mention using approxQuantile?



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104524697
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the
+   * feature (relative error less than 0.001).
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
s"strategy for imputation. " +
+s"If ${Imputer.mean}, then replace missing values using the mean value 
of the feature. " +
+s"If ${Imputer.median}, then replace missing values using the median 
value of the feature.",
+ParamValidators.inArray[String](Array(Imputer.mean, Imputer.median)))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, 
"outputCols",
+"output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+require($(inputCols).length == $(inputCols).distinct.length, 
s"inputCols duplicates:" +
+  s" (${$(inputCols).mkString(", ")})")
+require($(outputCols).length == $(outputCols).distinct.length, 
s"outputCols duplicates:" +
+  s" (${$(outputCols).mkString(", ")})")
+require($(inputCols).length == $(outputCols).length, 
s"inputCols(${$(inputCols).length})" +
+  s" and outputCols(${$(outputCols).length}) should have the same 
length")
+val outputFields = $(inputCols).zip($(outputCols)).map { case 
(inputCol, outputCol) =>
+  val inputField = schema(inputCol)
+  SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, 
FloatType))
+  StructField(outputCol, inputField.dataType, inputField.nullable)
+}
+StructType(schema ++ outputFields)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
--- End diff --

I didn't add the link as it may break java doc generation.



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104516526
  

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104410545
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the
+   * feature (relative error less than 0.001).
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", s"strategy for imputation. " +
+    s"If ${Imputer.mean}, then replace missing values using the mean value of the feature. " +
+    s"If ${Imputer.median}, then replace missing values using the median value of the feature.",
+    ParamValidators.inArray[String](Array(Imputer.mean, Imputer.median)))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, "missingValue",
+    "The placeholder for the missing values. All occurrences of missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, "outputCols",
+    "output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+    require($(inputCols).length == $(inputCols).distinct.length, s"inputCols duplicates:" +
+      s" (${$(inputCols).mkString(", ")})")
+    require($(outputCols).length == $(outputCols).distinct.length, s"outputCols duplicates:" +
+      s" (${$(outputCols).mkString(", ")})")
+    require($(inputCols).length == $(outputCols).length, s"inputCols(${$(inputCols).length})" +
+      s" and outputCols(${$(outputCols).length}) should have the same length")
+    val outputFields = $(inputCols).zip($(outputCols)).map { case (inputCol, outputCol) =>
+      val inputField = schema(inputCol)
+      SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
+      StructField(outputCol, inputField.dataType, inputField.nullable)
+    }
+    StructType(schema ++ outputFields)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features
+ * (SPARK-15041) and may create incorrect values when applied to a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing values.
+ *
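
The quoted doc above pins down the surrogate semantics: the mean or an approximate median, computed only over values that are not missing. A standalone sketch of those semantics in plain Scala (no Spark; the object name is illustrative, and an exact median stands in for the estimator's `approxQuantile` call with relative error 0.001):

```scala
// Sketch of surrogate selection: filter out the missingValue placeholder and
// NaN, then apply the configured strategy. Not the Spark implementation.
object SurrogateSketch {
  def surrogate(values: Seq[Double], strategy: String, missingValue: Double): Double = {
    // NaN is always treated as missing, regardless of the configured placeholder.
    val present = values.filter(v => !v.isNaN && v != missingValue)
    require(present.nonEmpty, "surrogate cannot be computed when all values are missing")
    strategy match {
      case "mean" => present.sum / present.length
      case "median" =>
        // The real Imputer approximates this with DataFrame.stat.approxQuantile;
        // an exact median is used here for the sketch.
        val sorted = present.sorted
        sorted(sorted.length / 2)
      case other => throw new IllegalArgumentException(s"unknown strategy: $other")
    }
  }
}
```

For the suite's first column (1.0, 11.0, 3.0, NaN) this yields 5.0 for "mean" and 3.0 for "median", matching the expected values in the test data.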

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104407441
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+    val df = spark.createDataFrame( Seq(
+      (0, 1.0, 4.0, 1.0, 1.0, 4.0, 4.0),
+      (1, 11.0, 12.0, 11.0, 11.0, 12.0, 12.0),
+      (2, 3.0, Double.NaN, 3.0, 3.0, 10.0, 12.0),
+      (3, Double.NaN, 14.0, 5.0, 3.0, 14.0, 14.0)
+    )).toDF("id", "value1", "value2", "expected_mean_value1", "expected_median_value1",
+      "expected_mean_value2", "expected_median_value2")
+    val imputer = new Imputer()
+      .setInputCols(Array("value1", "value2"))
+      .setOutputCols(Array("out1", "out2"))
+    ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if missingValue is not NaN") {
+    val df = spark.createDataFrame( Seq(
+      (0, 1.0, 1.0, 1.0),
+      (1, 3.0, 3.0, 3.0),
+      (2, Double.NaN, Double.NaN, Double.NaN),
+      (3, -1.0, 2.0, 3.0)
+    )).toDF("id", "value", "expected_mean_value", "expected_median_value")
+    val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+      .setMissingValue(-1.0)
+    ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+    val df = spark.createDataFrame( Seq(
+      (0, 1.0F, 1.0F, 1.0F),
+      (1, 3.0F, 3.0F, 3.0F),
+      (2, 10.0F, 10.0F, 10.0F),
+      (3, 10.0F, 10.0F, 10.0F),
+      (4, -1.0F, 6.0F, 3.0F)
+    )).toDF("id", "value", "expected_mean_value", "expected_median_value")
+    val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+      .setMissingValue(-1)
+    ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+    val rawDf = spark.createDataFrame( Seq(
+      (0, 4.0, 4.0, 4.0),
+      (1, 10.0, 10.0, 10.0),
+      (2, 10.0, 10.0, 10.0),
+      (3, Double.NaN, 8.0, 10.0),
+      (4, -1.0, 8.0, 10.0)
+    )).toDF("id", "rawValue", "expected_mean_value", "expected_median_value")
+    val df = rawDf.selectExpr("*", "IF(rawValue=-1.0, null, rawValue) as value")
+    val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+    ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+    val df = spark.createDataFrame( Seq(
+      (0, Double.NaN, 1.0, 1.0),
+      (1, Double.NaN, 3.0, 3.0),
+      (2, Double.NaN, Double.NaN, Double.NaN)
+    )).toDF("id", "value", "expected_mean_value", "expected_median_value")
+    Seq("mean", "median").foreach { strategy =>
+      val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+        .setStrategy(strategy)
+      intercept[SparkException] {
+        val model = imputer.fit(df)
+      }
+    }
+  }
+
+  test("Imputer throws exception when inputCols does not match outputCols") {
+    val df = spark.createDataFrame( Seq(
+      (0, 1.0, 1.0, 1.0),
+      (1, Double.NaN, 3.0, 3.0),
+      (2, Double.NaN, Double.NaN, Double.NaN)
+    )).toDF("id", "value1", "value2", "value3")
+    Seq("mean", "median").foreach { strategy
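
The null-handling test quoted above expects nulls, NaN, and the configured placeholder to all be replaced by the same surrogate. The expected columns can be reproduced by hand with a plain-Scala sketch, using `Option[Double]` to stand in for a nullable DataFrame column (the function name is illustrative, not part of the Imputer API):

```scala
// Sketch of per-column imputation: nulls (None), NaN, and the missingValue
// placeholder are excluded from the mean and then replaced by it.
def imputeColumn(values: Seq[Option[Double]], missingValue: Double): Seq[Double] = {
  val present = values.flatten.filter(v => !v.isNaN && v != missingValue)
  val surrogateMean = present.sum / present.length  // "mean" strategy only, for brevity
  values.map {
    case Some(v) if !v.isNaN && v != missingValue => v
    case _ => surrogateMean  // null, NaN, or the placeholder
  }
}
```

With the test's data (4.0, 10.0, 10.0, NaN, null) and the default NaN placeholder, the mean over the present values is 8.0, matching the suite's expected_mean_value column.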

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104407571
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+    require($(inputCols).length == $(inputCols).distinct.length, s"inputCols duplicates:" +
--- End diff --

perhaps `... inputCols contains duplicates ...`
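
A quick standalone sketch of the check with the suggested "contains duplicates" wording; the helper name is illustrative, and `require` throws `IllegalArgumentException` on failure:

```scala
// Validate that a column-name array has no duplicates, with the clearer
// error message proposed in the review comment above.
def checkNoDuplicates(name: String, cols: Array[String]): Unit = {
  require(cols.length == cols.distinct.length,
    s"$name contains duplicates: (${cols.mkString(", ")})")
}
```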


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104407942
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,166 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104404523
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104409770
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,166 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104407603
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+    require($(inputCols).length == $(inputCols).distinct.length, s"inputCols duplicates:" +
+      s" (${$(inputCols).mkString(", ")})")
+    require($(outputCols).length == $(outputCols).distinct.length, s"outputCols duplicates:" +
--- End diff --

perhaps `... outputCols contains duplicates ...`





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104408351
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 4.0, 1.0, 1.0, 4.0, 4.0),
+  (1, 11.0, 12.0, 11.0, 11.0, 12.0, 12.0),
+  (2, 3.0, Double.NaN, 3.0, 3.0, 10.0, 12.0),
+  (3, Double.NaN, 14.0, 5.0, 3.0, 14.0, 14.0)
+)).toDF("id", "value1", "value2", "expected_mean_value1", 
"expected_median_value1",
+  "expected_mean_value2", "expected_median_value2")
+val imputer = new Imputer()
+  .setInputCols(Array("value1", "value2"))
+  .setOutputCols(Array("out1", "out2"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if 
missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1.0)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+val rawDf = spark.createDataFrame( Seq(
+  (0, 4.0, 4.0, 4.0),
+  (1, 10.0, 10.0, 10.0),
+  (2, 10.0, 10.0, 10.0),
+  (3, Double.NaN, 8.0, 10.0),
+  (4, -1.0, 8.0, 10.0)
+)).toDF("id", "rawValue", "expected_mean_value", 
"expected_median_value")
+val df = rawDf.selectExpr("*", "IF(rawValue=-1.0, null, rawValue) as 
value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+val df = spark.createDataFrame( Seq(
+  (0, Double.NaN, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+.setStrategy(strategy)
+  intercept[SparkException] {
+val model = imputer.fit(df)
+  }
+}
+  }
+
+  test("Imputer throws exception when inputCols does not match outputCols") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value1", "value2", "value3")
+Seq("mean", "median").foreach { strategy 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104408310
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1.0)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+val df = spark.createDataFrame( Seq(
+  (0, 4.0, 4.0, 4.0),
+  (1, 10.0, 10.0, 10.0),
+  (2, 10.0, 10.0, 10.0),
+  (3, Double.NaN, 8.0, 10.0),
+  (4, -1.0, 8.0, 10.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val df2 = df.selectExpr("*", "IF(value=-1.0, null, value) as nullable_value")
+val imputer = new Imputer().setInputCols(Array("nullable_value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df2)
+  }
+
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+val df = spark.createDataFrame( Seq(
+  (0, Double.NaN, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+.setStrategy(strategy)
+  intercept[SparkException] {
+val model = imputer.fit(df)
+  }
+}
+  }
+
+  test("Imputer read/write") {
+val t = new Imputer()
+  .setInputCols(Array("myInputCol"))
+  .setOutputCols(Array("myOutputCol"))
+  .setMissingValue(-1.0)
+testDefaultReadWrite(t)
+  }
+
+  test("ImputerModel read/write") {
+val spark = this.spark
+import spark.implicits._
+val surrogateDF = Seq(1.234).toDF("myInputCol")
--- End diff --

Ok - we should add a test here to check the column names of `instance` and `newInstance` match up? (The below check is just for 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104404137
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the
+   * feature (relative error less than 0.001).
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", s"strategy for imputation. " +
+s"If ${Imputer.mean}, then replace missing values using the mean value of the feature. " +
+s"If ${Imputer.median}, then replace missing values using the median value of the feature.",
+ParamValidators.inArray[String](Array(Imputer.mean, Imputer.median)))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, "missingValue",
+"The placeholder for the missing values. All occurrences of missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, "outputCols",
+"output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+require($(inputCols).length == $(inputCols).distinct.length, s"inputCols duplicates:" +
+  s" (${$(inputCols).mkString(", ")})")
+require($(outputCols).length == $(outputCols).distinct.length, s"outputCols duplicates:" +
+  s" (${$(outputCols).mkString(", ")})")
+require($(inputCols).length == $(outputCols).length, s"inputCols(${$(inputCols).length})" +
+  s" and outputCols(${$(outputCols).length}) should have the same length")
+val outputFields = $(inputCols).zip($(outputCols)).map { case (inputCol, outputCol) =>
+  val inputField = schema(inputCol)
+  SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
+  StructField(outputCol, inputField.dataType, inputField.nullable)
+}
+StructType(schema ++ outputFields)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
--- End diff --

As mentioned above at https://github.com/apache/spark/pull/11601/files#r104403880, you can add the note about relative error here.

Something like "For computing median, `approxQuantile` is used with a relative error of X" (provide a ScalaDoc link 
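To make the suggested doc note concrete, here is a minimal standalone sketch (plain Scala, not the PR's implementation) of the surrogate semantics discussed in this thread: occurrences of `missingValue` and NaN are filtered out before the statistic is taken. The exact-median shortcut stands in for Spark's `approxQuantile(0.5)` with a small relative error; the object and method names are illustrative assumptions.

```scala
object SurrogateSketch {
  // Compute the imputation surrogate for one column. Missing values
  // (NaN and the missingValue placeholder) are filtered out first,
  // mirroring the behaviour described in the review comments.
  def surrogate(values: Seq[Double], strategy: String, missingValue: Double): Double = {
    val present = values.filter(v => !v.isNaN && v != missingValue)
    require(present.nonEmpty, "surrogate cannot be computed: no non-missing values")
    strategy match {
      case "mean" => present.sum / present.length
      case "median" =>
        // Exact upper median here; the PR uses approxQuantile(0.5)
        // with a small relative error instead.
        val sorted = present.sorted
        sorted(sorted.length / 2)
      case other => throw new IllegalArgumentException(s"unknown strategy: $other")
    }
  }

  def main(args: Array[String]): Unit = {
    val vs = Seq(1.0, 3.0, Double.NaN, -1.0)
    println(surrogate(vs, "mean", -1.0))   // 2.0
    println(surrogate(vs, "median", -1.0)) // 3.0
  }
}
```

These two results match the `expected_mean_value`/`expected_median_value` columns in the "handle NaNs" test above.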

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104404339
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the
+   * feature (relative error less than 0.001).
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", s"strategy for imputation. " +
+s"If ${Imputer.mean}, then replace missing values using the mean value of the feature. " +
+s"If ${Imputer.median}, then replace missing values using the median value of the feature.",
+ParamValidators.inArray[String](Array(Imputer.mean, Imputer.median)))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, "missingValue",
+"The placeholder for the missing values. All occurrences of missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, "outputCols",
+"output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+require($(inputCols).length == $(inputCols).distinct.length, s"inputCols duplicates:" +
+  s" (${$(inputCols).mkString(", ")})")
+require($(outputCols).length == $(outputCols).distinct.length, s"outputCols duplicates:" +
+  s" (${$(outputCols).mkString(", ")})")
+require($(inputCols).length == $(outputCols).length, s"inputCols(${$(inputCols).length})" +
+  s" and outputCols(${$(outputCols).length}) should have the same length")
+val outputFields = $(inputCols).zip($(outputCols)).map { case (inputCol, outputCol) =>
+  val inputField = schema(inputCol)
+  SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
+  StructField(outputCol, inputField.dataType, inputField.nullable)
+}
+StructType(schema ++ outputFields)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features yet
+ * (SPARK-15041) and possibly creates incorrect values for a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing values.
+ * 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104409545
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 4.0, 1.0, 1.0, 4.0, 4.0),
+  (1, 11.0, 12.0, 11.0, 11.0, 12.0, 12.0),
+  (2, 3.0, Double.NaN, 3.0, 3.0, 10.0, 12.0),
+  (3, Double.NaN, 14.0, 5.0, 3.0, 14.0, 14.0)
+)).toDF("id", "value1", "value2", "expected_mean_value1", "expected_median_value1",
+  "expected_mean_value2", "expected_median_value2")
+val imputer = new Imputer()
+  .setInputCols(Array("value1", "value2"))
+  .setOutputCols(Array("out1", "out2"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1.0)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+val rawDf = spark.createDataFrame( Seq(
+  (0, 4.0, 4.0, 4.0),
+  (1, 10.0, 10.0, 10.0),
+  (2, 10.0, 10.0, 10.0),
+  (3, Double.NaN, 8.0, 10.0),
+  (4, -1.0, 8.0, 10.0)
+)).toDF("id", "rawValue", "expected_mean_value", "expected_median_value")
+val df = rawDf.selectExpr("*", "IF(rawValue=-1.0, null, rawValue) as value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+val df = spark.createDataFrame( Seq(
+  (0, Double.NaN, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+.setStrategy(strategy)
+  intercept[SparkException] {
--- End diff --

Check message here also.
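The suggested message check can be sketched in plain Scala as follows; in the actual suite it would be `val e = intercept[SparkException] { imputer.fit(df) }` followed by an assertion on `e.getMessage`. The `fit` stand-in and its message text here are assumptions for illustration, not Spark's real API.

```scala
import scala.util.{Failure, Success, Try}

object MessageCheckSketch {
  // Hypothetical stand-in for Imputer.fit failing when every value is
  // missing, so that there is a message to assert on.
  def fit(values: Seq[Double]): Double = {
    val present = values.filterNot(_.isNaN)
    if (present.isEmpty) throw new RuntimeException("surrogate cannot be computed")
    present.sum / present.length
  }

  def main(args: Array[String]): Unit = {
    // Capture the exception and assert on its message, per the review
    // suggestion, rather than only asserting that an exception occurs.
    Try(fit(Seq(Double.NaN, Double.NaN))) match {
      case Failure(e) => assert(e.getMessage.contains("surrogate cannot be computed"))
      case Success(_) => sys.error("expected fit to fail")
    }
  }
}
```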


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104406006
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the
+   * feature (relative error less than 0.001).
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", s"strategy for imputation. " +
+s"If ${Imputer.mean}, then replace missing values using the mean value of the feature. " +
+s"If ${Imputer.median}, then replace missing values using the median value of the feature.",
+ParamValidators.inArray[String](Array(Imputer.mean, Imputer.median)))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, "missingValue",
+"The placeholder for the missing values. All occurrences of missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, "outputCols",
+"output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+require($(inputCols).length == $(inputCols).distinct.length, s"inputCols duplicates:" +
+  s" (${$(inputCols).mkString(", ")})")
+require($(outputCols).length == $(outputCols).distinct.length, s"outputCols duplicates:" +
+  s" (${$(outputCols).mkString(", ")})")
+require($(inputCols).length == $(outputCols).length, s"inputCols(${$(inputCols).length})" +
+  s" and outputCols(${$(outputCols).length}) should have the same length")
+val outputFields = $(inputCols).zip($(outputCols)).map { case (inputCol, outputCol) =>
+  val inputField = schema(inputCol)
+  SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
+  StructField(outputCol, inputField.dataType, inputField.nullable)
+}
+StructType(schema ++ outputFields)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features yet
+ * (SPARK-15041) and possibly creates incorrect values for a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing values.
+ * 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104407859
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 4.0, 1.0, 1.0, 4.0, 4.0),
+  (1, 11.0, 12.0, 11.0, 11.0, 12.0, 12.0),
+  (2, 3.0, Double.NaN, 3.0, 3.0, 10.0, 12.0),
+  (3, Double.NaN, 14.0, 5.0, 3.0, 14.0, 14.0)
+)).toDF("id", "value1", "value2", "expected_mean_value1", "expected_median_value1",
+  "expected_mean_value2", "expected_median_value2")
+val imputer = new Imputer()
+  .setInputCols(Array("value1", "value2"))
+  .setOutputCols(Array("out1", "out2"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1.0)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+val rawDf = spark.createDataFrame( Seq(
+  (0, 4.0, 4.0, 4.0),
+  (1, 10.0, 10.0, 10.0),
+  (2, 10.0, 10.0, 10.0),
+  (3, Double.NaN, 8.0, 10.0),
+  (4, -1.0, 8.0, 10.0)
+)).toDF("id", "rawValue", "expected_mean_value", "expected_median_value")
+val df = rawDf.selectExpr("*", "IF(rawValue=-1.0, null, rawValue) as value")
+val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+val df = spark.createDataFrame( Seq(
+  (0, Double.NaN, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value", "expected_mean_value", "expected_median_value")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+.setStrategy(strategy)
+  intercept[SparkException] {
+val model = imputer.fit(df)
+  }
+}
+  }
+
+  test("Imputer throws exception when inputCols does not match outputCols") {
--- End diff --

Maybe call the test "Imputer input & output column validation" as it covers more than testing matching lengths.
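The validation that the renamed test would cover can be sketched standalone; this mirrors the `require` checks quoted from `validateAndTransformSchema` (duplicate input columns, duplicate output columns, mismatched lengths) but is a plain-Scala illustration, not the actual Spark code.

```scala
object ColValidationSketch {
  // Mirror of the three require checks in validateAndTransformSchema,
  // without the schema-dependent parts.
  def validate(inputCols: Array[String], outputCols: Array[String]): Unit = {
    require(inputCols.length == inputCols.distinct.length,
      s"inputCols duplicates: (${inputCols.mkString(", ")})")
    require(outputCols.length == outputCols.distinct.length,
      s"outputCols duplicates: (${outputCols.mkString(", ")})")
    require(inputCols.length == outputCols.length,
      s"inputCols(${inputCols.length}) and outputCols(${outputCols.length}) " +
        "should have the same length")
  }

  def main(args: Array[String]): Unit = {
    validate(Array("value1", "value2"), Array("out1", "out2")) // passes
    // Mismatched lengths are rejected:
    assert(scala.util.Try(validate(Array("value1"), Array("out1", "out2"))).isFailure)
    // Duplicate input columns are rejected:
    assert(scala.util.Try(validate(Array("value1", "value1"), Array("out1", "out2"))).isFailure)
  }
}
```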



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-06 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104403880
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.HasInputCols
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the
+   * feature (relative error less than 0.001).
--- End diff --

I think remove the part `(relative error less than 0.001)`.

This can be moved to the overall ScalaDoc for `Imputer` at L95.
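One possible wording for the class-level ScalaDoc, following this suggestion, is sketched below; the link target and the placement are assumptions, and the 0.001 figure is taken from the param doc quoted above.

```scala
/**
 * Imputation estimator for completing missing values, using the mean or the median
 * of the columns in which the missing values are located.
 *
 * For computing the median, [[org.apache.spark.sql.DataFrameStatFunctions.approxQuantile]]
 * is used with a relative error of 0.001.
 */
```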



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-03 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104280679
  
--- Diff: mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if 
missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1.0)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+val df = spark.createDataFrame( Seq(
+  (0, 4.0, 4.0, 4.0),
+  (1, 10.0, 10.0, 10.0),
+  (2, 10.0, 10.0, 10.0),
+  (3, Double.NaN, 8.0, 10.0),
+  (4, -1.0, 8.0, 10.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val df2 = df.selectExpr("*", "IF(value=-1.0, null, value) as 
nullable_value")
+val imputer = new 
Imputer().setInputCols(Array("nullable_value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df2)
+  }
+
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+val df = spark.createDataFrame( Seq(
+  (0, Double.NaN, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+.setStrategy(strategy)
+  intercept[SparkException] {
+val model = imputer.fit(df)
+  }
+}
+  }
+
+  test("Imputer read/write") {
+val t = new Imputer()
+  .setInputCols(Array("myInputCol"))
+  .setOutputCols(Array("myOutputCol"))
+  .setMissingValue(-1.0)
+testDefaultReadWrite(t)
+  }
+
+  test("ImputerModel read/write") {
+val spark = this.spark
+import spark.implicits._
+val surrogateDF = Seq(1.234).toDF("myInputCol")
--- End diff --

This happens to be the correct column name for now.
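The suite above pins down the surrogate semantics: missingValue occurrences, nulls, and NaN are all filtered out before the statistic is computed, NaN cells are only replaced when missingValue itself is NaN, and an exception is raised when nothing survives the filter. A plain-Python sketch of the mean strategy under those rules (the function name is mine; this illustrates the semantics, it is not Spark's implementation):

```python
import math

def impute_mean(values, missing_value=float("nan")):
    # Treat None (null) always as missing; treat NaN as missing only when
    # missingValue itself is NaN -- mirroring the tests quoted above.
    def is_missing(v):
        if v is None:
            return True
        if math.isnan(missing_value):
            return math.isnan(v)
        return v == missing_value

    # The surrogate is computed after filtering out missing values AND NaN.
    clean = [v for v in values if not is_missing(v) and not math.isnan(v)]
    if not clean:
        raise ValueError("surrogate cannot be computed when all values are missing")
    surrogate = sum(clean) / len(clean)
    return [surrogate if is_missing(v) else v for v in values]

# [1.0, 3.0, NaN, -1.0] with missingValue=-1.0 gives surrogate (1+3)/2 = 2.0;
# the NaN cell is left untouched, matching the expected_mean column above.
```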



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-03 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104258573
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCols, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols with HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the feature.
+   * If "median", then replace missing values using the approximate median value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", "strategy for imputation. " +
+    "If mean, then replace missing values using the mean value of the feature. " +
+    "If median, then replace missing values using the median value of the feature.",
+    ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, "missingValue",
+    "The placeholder for the missing values. All occurrences of missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, "outputCols",
+    "output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType = {
+    require($(inputCols).length == $(outputCols).length, "inputCols and outputCols should have " +
+      "the same length")
+    val localInputCols = $(inputCols)
+    val localOutputCols = $(outputCols)
+    var outputSchema = schema
+
+    $(inputCols).indices.foreach { i =>
+      val inputCol = localInputCols(i)
+      val outputCol = localOutputCols(i)
+      val inputType = schema(inputCol).dataType
+      SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
+      outputSchema = SchemaUtils.appendColumn(outputSchema, outputCol, inputType)
+    }
+    outputSchema
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the mean or the median
+ * of the column in which the missing values are located. The input column should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features yet
+ * (SPARK-15041) and possibly creates incorrect values for a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing values.
+ * All Null values in the input column are treated as missing, and so are also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] 
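The `validateAndTransformSchema` in the quoted trait enforces a 1:1 pairing of input and output columns, numeric (Double/Float) input types, and output columns that inherit their input's type. The same checks in a plain-Python sketch (a dict of column name to type string stands in for `StructType`; the names are mine):

```python
def validate_and_transform_schema(schema, input_cols, output_cols):
    # schema: dict of column name -> type string, standing in for StructType.
    if len(input_cols) != len(output_cols):
        raise ValueError("inputCols and outputCols should have the same length")
    out_schema = dict(schema)
    for in_col, out_col in zip(input_cols, output_cols):
        in_type = schema[in_col]
        if in_type not in ("double", "float"):  # DoubleType / FloatType only
            raise TypeError("column %s must be of DoubleType or FloatType" % in_col)
        out_schema[out_col] = in_type  # appended output column keeps the input's type
    return out_schema
```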

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-03 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104258382
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-03 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104257956
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-03 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104257857
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-03 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r104257741
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+private[feature] trait ImputerParams extends Params with HasInputCols with HasOutputCol {
--- End diff --

Sure, however I didn't get your first comment. Do you mean we should remove the import?






[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103871091
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103868238
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+private[feature] trait ImputerParams extends Params with HasInputCols with HasOutputCol {
--- End diff --

We don't use `HasOutputCol` anymore, correct?
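Whatever mixins the trait ends up with, the estimator/model split stays the same: `fit` computes one surrogate per input column, and the returned model substitutes those surrogates during `transform`. A minimal Python sketch of that flow (rows as dicts instead of DataFrames, mean strategy only; class and field names are illustrative, not Spark's API):

```python
import math

class ImputerModelSketch:
    """Holds one surrogate per input column; transform() substitutes them."""
    def __init__(self, input_cols, output_cols, surrogates, missing_value):
        self.input_cols = input_cols
        self.output_cols = output_cols
        self.surrogates = surrogates          # column name -> surrogate value
        self.missing_value = missing_value

    def _is_missing(self, v):
        if v is None:                         # nulls are always treated as missing
            return True
        if math.isnan(self.missing_value):
            return math.isnan(v)
        return v == self.missing_value

    def transform(self, rows):
        out = []
        for row in rows:
            row = dict(row)
            for in_col, out_col in zip(self.input_cols, self.output_cols):
                v = row[in_col]
                row[out_col] = self.surrogates[in_col] if self._is_missing(v) else v
            out.append(row)
        return out

class ImputerSketch:
    """fit() computes the surrogates and returns the model above."""
    def __init__(self, input_cols, output_cols, missing_value=float("nan")):
        self.input_cols = input_cols
        self.output_cols = output_cols
        self.missing_value = missing_value

    def fit(self, rows):
        surrogates = {}
        for col in self.input_cols:
            # filter out nulls, missingValue occurrences, and NaN before averaging
            clean = [r[col] for r in rows
                     if r[col] is not None
                     and r[col] != self.missing_value
                     and not math.isnan(r[col])]
            if not clean:
                raise ValueError("surrogate cannot be computed for column " + col)
            surrogates[col] = sum(clean) / len(clean)   # "mean" strategy only
        return ImputerModelSketch(self.input_cols, self.output_cols,
                                  surrogates, self.missing_value)
```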





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103888377
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCols, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, 
"outputCols",
+"output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+require($(inputCols).length == $(outputCols).length, "inputCols and 
outputCols should have " +
+  "the same length")
+val localInputCols = $(inputCols)
+val localOutputCols = $(outputCols)
+var outputSchema = schema
+
+$(inputCols).indices.foreach { i =>
+  val inputCol = localInputCols(i)
+  val outputCol = localOutputCols(i)
+  val inputType = schema(inputCol).dataType
+  SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, 
FloatType))
+  outputSchema = SchemaUtils.appendColumn(outputSchema, outputCol, 
inputType)
+}
+outputSchema
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical 
features yet
+ * (SPARK-15041) and possibly creates incorrect values for a categorical 
feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] 
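The class doc above describes the imputation semantics: compute a surrogate (mean or median) over the non-missing values of each column, then substitute it for every null, NaN, or `missingValue` occurrence. A minimal plain-Scala sketch of those semantics (no Spark dependency; `ImputerSketch` and its signatures are illustrative, and Spark's actual median comes from the approximate `approxQuantile`):

```scala
object ImputerSketch {
  // Surrogate for one column: mean or median of the values that are
  // present (Some), not NaN, and not equal to the missingValue placeholder.
  def surrogate(values: Seq[Option[Double]], strategy: String, missingValue: Double): Double = {
    val valid = values.flatten.filter(v => !v.isNaN && v != missingValue)
    require(valid.nonEmpty, "surrogate cannot be computed on an all-missing column")
    strategy match {
      case "mean"   => valid.sum / valid.length
      case "median" => valid.sorted.apply(valid.length / 2) // simple upper median, not Spark's approximate one
      case other    => throw new IllegalArgumentException(s"unknown strategy: $other")
    }
  }

  // Replace every missing entry (null/None, NaN, or the placeholder) with the surrogate.
  def impute(values: Seq[Option[Double]], strategy: String, missingValue: Double): Seq[Double] = {
    val s = surrogate(values, strategy, missingValue)
    values.map {
      case Some(v) if !v.isNaN && v != missingValue => v
      case _ => s
    }
  }
}
```

Note that when `missingValue` is `Double.NaN`, the `v != missingValue` test is vacuously true, so the `isNaN` check is what actually filters NaN entries, matching the doc's "null values are always treated as missing".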

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103872555
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103867980
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+/**
--- End diff --

Fix comment indentation here.





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103888244
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103870352
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103870533
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103873813
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
--- End diff --

We need tests for multiple columns too
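A self-contained sketch of what such a multi-column check could assert, with each column imputed independently by its own mean (plain Scala, mean strategy only; `imputeColumns` is an illustrative helper, not the Spark API):

```scala
object MultiColumnImputeCheck {
  // Impute each column independently with that column's own mean,
  // mirroring how Imputer pairs inputCols(i) with outputCols(i).
  def imputeColumns(cols: Map[String, Seq[Double]]): Map[String, Seq[Double]] =
    cols.map { case (name, values) =>
      val valid = values.filterNot(_.isNaN)
      val mean  = valid.sum / valid.length
      name -> values.map(v => if (v.isNaN) mean else v)
    }
}
```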





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103869621
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103878058
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCols, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCols with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /**
+   * Param for output column names.
+   * @group param
+   */
+  final val outputCols: StringArrayParam = new StringArrayParam(this, 
"outputCols",
+"output column names")
+
+  /** @group getParam */
+  final def getOutputCols: Array[String] = $(outputCols)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+require($(inputCols).length == $(outputCols).length, "inputCols and 
outputCols should have " +
+  "the same length")
+val localInputCols = $(inputCols)
+val localOutputCols = $(outputCols)
+var outputSchema = schema
+
+$(inputCols).indices.foreach { i =>
+  val inputCol = localInputCols(i)
+  val outputCol = localOutputCols(i)
+  val inputType = schema(inputCol).dataType
+  SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, 
FloatType))
+  outputSchema = SchemaUtils.appendColumn(outputSchema, outputCol, 
inputType)
+}
+outputSchema
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType. Imputer does not currently support categorical 
features
+ * (SPARK-15041) and may create incorrect values for a categorical 
feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] 
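The behavior described in the quoted scaladoc - filter out occurrences of `missingValue` (and nulls/NaN), compute the mean or median of what remains, then substitute it - can be sketched in plain Scala. This is a hedged illustration of the documented semantics, not Spark's implementation; in particular, the exact median below may select a different element than `approxQuantile` does.

```scala
// Plain-Scala sketch of the documented Imputer semantics (illustrative only).
// Occurrences of `missingValue` (and NaN) are filtered out before the
// surrogate is computed, then every missing entry is replaced by it.
def surrogate(values: Seq[Double], missingValue: Double, strategy: String): Double = {
  val present = values.filter(v => !v.isNaN && v != missingValue)
  // Mirrors the "surrogate cannot be computed" failure tested in ImputerSuite.
  require(present.nonEmpty, "surrogate cannot be computed when all values are missing")
  strategy match {
    case "mean"   => present.sum / present.length
    case "median" =>
      val sorted = present.sorted
      sorted(sorted.length / 2) // exact upper median; Spark uses approxQuantile
    case other    => throw new IllegalArgumentException(s"unknown strategy: $other")
  }
}

def impute(values: Seq[Double], missingValue: Double, strategy: String): Seq[Double] = {
  val s = surrogate(values, missingValue, strategy)
  values.map(v => if (v.isNaN || v == missingValue) s else v)
}
```

For example, `impute(Seq(1.0, 1.0, 3.0, 4.0, Double.NaN), Double.NaN, "mean")` fills the NaN with 2.25, the same expected mean used in the quoted test suite.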

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103868010
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
--- End diff --

Not applicable anymore as it's used below now.




[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103868184
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
--- End diff --

All 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103870475
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103886487
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103874503
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.{SparkException, SparkFunSuite}
+import org.apache.spark.ml.util.DefaultReadWriteTest
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.{DataFrame, Row}
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if 
missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1.0)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+  .setMissingValue(-1)
+ImputerSuite.iterateStrategyTest(imputer, df)
+  }
+
+  test("Imputer should impute null as well as 'missingValue'") {
+val df = spark.createDataFrame( Seq(
+  (0, 4.0, 4.0, 4.0),
+  (1, 10.0, 10.0, 10.0),
+  (2, 10.0, 10.0, 10.0),
+  (3, Double.NaN, 8.0, 10.0),
+  (4, -1.0, 8.0, 10.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+val df2 = df.selectExpr("*", "IF(value=-1.0, null, value) as 
nullable_value")
+val imputer = new 
Imputer().setInputCols(Array("nullable_value")).setOutputCols(Array("out"))
+ImputerSuite.iterateStrategyTest(imputer, df2)
+  }
+
+
+  test("Imputer throws exception when surrogate cannot be computed") {
+val df = spark.createDataFrame( Seq(
+  (0, Double.NaN, 1.0, 1.0),
+  (1, Double.NaN, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCols(Array("value")).setOutputCols(Array("out"))
+.setStrategy(strategy)
+  intercept[SparkException] {
+val model = imputer.fit(df)
+  }
+}
+  }
+
+  test("Imputer read/write") {
+val t = new Imputer()
+  .setInputCols(Array("myInputCol"))
+  .setOutputCols(Array("myOutputCol"))
+  .setMissingValue(-1.0)
+testDefaultReadWrite(t)
+  }
+
+  test("ImputerModel read/write") {
+val spark = this.spark
+import spark.implicits._
+val surrogateDF = Seq(1.234).toDF("myInputCol")
--- End diff --

This should be the "surrogate" column name - though I see it isn't actually used
in load or transform.



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-03-02 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r103878625
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,260 @@
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+require($(inputCols).length == $(outputCols).length, "inputCols and 
outputCols should have " +
+  "the same length")
+val localInputCols = $(inputCols)
+val localOutputCols = $(outputCols)
+var outputSchema = schema
+
+$(inputCols).indices.foreach { i =>
--- End diff --

Can do `$(inputCols).zip($(outputCols)).foreach { case (inputCol, 
outputCol) => ...`
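The suggested zip-based refactor can be sketched outside Spark with the schema modeled as an ordered list of (name, type) pairs. This is a hedged illustration of the pairing idea only; `StructType`, `SchemaUtils.checkColumnTypes`, and `SchemaUtils.appendColumn` behave differently in detail.

```scala
// Plain-Scala sketch of the zip-based refactor suggested above: pair each
// input column with its output column instead of indexing into both arrays.
def validateAndTransform(
    schema: Vector[(String, String)],          // (column name, type name)
    inputCols: Array[String],
    outputCols: Array[String]): Vector[(String, String)] = {
  require(inputCols.length == outputCols.length,
    "inputCols and outputCols should have the same length")
  val types = schema.toMap
  inputCols.zip(outputCols).foldLeft(schema) { case (acc, (inputCol, outputCol)) =>
    val inputType = types(inputCol)
    // Stand-in for SchemaUtils.checkColumnTypes(schema, inputCol, Seq(DoubleType, FloatType))
    require(inputType == "double" || inputType == "float",
      s"column $inputCol must be of type double or float but was $inputType")
    acc :+ (outputCol -> inputType)            // append output column with the input's type
  }
}
```

Zipping removes the index bookkeeping and the `localInputCols`/`localOutputCols` temporaries, while the length `require` still guards against mismatched arrays.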




[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-02-20 Thread hhbyyh
Github user hhbyyh commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r102141627
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,225 @@
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features
+ * (SPARK-15041) and may create incorrect values for a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
+  @Since("2.1.0")
+  def setInputCol(value: String): this.type = set(inputCol, value)
+
+  /** @group setParam */
+  @Since("2.1.0")
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  /**
+   * Imputation strategy. Available options are ["mean", "median"].
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setStrategy(value: String): this.type = set(strategy, value)
+
+  /** @group setParam */
+  @Since("2.1.0")
+  def setMissingValue(value: Double): this.type = set(missingValue, value)
+
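
For readers following along, here is a minimal usage sketch of the API quoted above. It assumes an active `SparkSession` named `spark`; the DataFrame contents and column names are illustrative, not taken from the PR:

```scala
import org.apache.spark.ml.feature.Imputer

// Toy DataFrame with a DoubleType column containing the default placeholder, NaN.
val df = spark.createDataFrame(Seq(
  (0, 1.0), (1, 3.0), (2, Double.NaN)
)).toDF("id", "value")

val imputer = new Imputer()
  .setInputCol("value")
  .setOutputCol("value_imputed")
  .setStrategy("median")        // "mean" is the default
  .setMissingValue(Double.NaN)  // the default placeholder

// fit() computes the surrogate from the non-missing rows only;
// transform() replaces NaN (and null) with it in the output column.
val model = imputer.fit(df)
model.transform(df).show()
```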

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2017-02-14 Thread ChristopheDuong
Github user ChristopheDuong commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r101032949
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,225 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Note that null values are always treated as missing.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType. Currently Imputer does not support categorical features
+ * (SPARK-15041) and may create incorrect values for a categorical feature.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
+  @Since("2.1.0")
+  def setInputCol(value: String): this.type = set(inputCol, value)
+
+  /** @group setParam */
+  @Since("2.1.0")
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  /**
+   * Imputation strategy. Available options are ["mean", "median"].
+   * @group setParam
+   */
+  @Since("2.1.0")
+  def setStrategy(value: String): this.type = set(strategy, value)
+
+  /** @group setParam */
+  @Since("2.1.0")
+  def setMissingValue(value: Double): this.type = set(missingValue, value)
 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-27 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80644244
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
--- End diff --

I've heard the argument that everything in the class is implicitly since 
2.1.0, because the class itself is, unless otherwise stated. That does make 
sense. But I slightly favour being explicit about it (even if it is a bit 
pedantic), so let's add the annotation to all the setters.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-27 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80644112
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
+  def setInputCol(value: String): this.type = set(inputCol, value)
+
+  /** @group setParam */
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  /**
+   * Imputation strategy. Available options are ["mean", "median"].
+   * @group setParam
+   */
+  def setStrategy(value: String): this.type = set(strategy, value)
+
+  /** @group setParam */
+  def setMissingValue(value: Double): this.type = set(missingValue, value)
+
+  setDefault(strategy -> "mean", missingValue -> Double.NaN)
+
+  override def fit(dataset: Dataset[_]): ImputerModel = {
+transformSchema(dataset.schema, logging = true)
+val ic = col($(inputCol))
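
The quoted diff is truncated here. For orientation, a hedged sketch of how the surrogate computation could proceed from `ic`, consistent with the param docs above (an illustration, not the PR's exact code; the intermediate column name "value" is an assumption):

```scala
val ic = col($(inputCol))
// Per the class doc, the surrogate is computed after filtering out missing
// values: nulls, the configured placeholder, and NaN.
val filtered = dataset.select(ic.cast(DoubleType).as("value"))
  .filter(col("value").isNotNull &&
    col("value") =!= $(missingValue) && !col("value").isNaN)
val surrogate = $(strategy) match {
  case "mean"   => filtered.select(avg("value")).first().getDouble(0)
  case "median" => filtered.stat.approxQuantile("value", Array(0.5), 0.001).head
}
```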

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80592740
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
--- End diff --

Doc: Note that null values are always treated as missing.
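
To illustrate that note (values and column names are hypothetical, assuming an active `SparkSession` named `spark`): with a non-NaN placeholder, both nulls and the placeholder are imputed.

```scala
// With missingValue = -1.0, both the null in row 2 and the placeholder -1.0
// in row 3 are imputed with the surrogate (the mean of 1.0 and 3.0, i.e. 2.0).
val df = spark.createDataFrame(Seq(
  (0, Some(1.0)), (1, Some(3.0)), (2, None: Option[Double]), (3, Some(-1.0))
)).toDF("id", "value")

val model = new Imputer()
  .setInputCol("value").setOutputCol("out")
  .setMissingValue(-1.0)
  .fit(df)            // surrogate = 2.0
model.transform(df)   // rows 2 and 3 both receive 2.0 in "out"
```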





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80592752
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
--- End diff --

This is already checked in appendColumn





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80594424
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+
ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
+  def setInputCol(value: String): this.type = set(inputCol, value)
+
+  /** @group setParam */
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  /**
+   * Imputation strategy. Available options are ["mean", "median"].
+   * @group setParam
+   */
+  def setStrategy(value: String): this.type = set(strategy, value)
+
+  /** @group setParam */
+  def setMissingValue(value: Double): this.type = set(missingValue, value)
+
+  setDefault(strategy -> "mean", missingValue -> Double.NaN)
+
+  override def fit(dataset: Dataset[_]): ImputerModel = {
+transformSchema(dataset.schema, logging = true)
+val ic = 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80595926
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.util.{DefaultReadWriteTest}
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.Row
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+  val model = imputer.fit(df)
+  model.transform(df).select("expected_" + strategy, 
"out").collect().foreach {
+   case Row(exp: Double, out: Double) =>
+  assert(exp ~== out absTol 1e-5, s"Imputed values differ. 
Expected: $exp, actual: $out")
+  }
+}
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if 
missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
--- End diff --

This basic logic could be reused across the unit tests comparing actual and 
expected results.  I'd recommend extracting this foreach into a method which 
can be called for each of the tests in this suite.
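
One possible shape for that extraction, built from the quoted test code (the helper name and signature are suggestions, not the PR's final code):

```scala
// Shared helper: fits an Imputer per strategy on df and checks the "out"
// column against the matching "expected_<strategy>" column.
def checkImputer(df: DataFrame, missingValue: Double = Double.NaN): Unit = {
  Seq("mean", "median").foreach { strategy =>
    val imputer = new Imputer()
      .setInputCol("value").setOutputCol("out")
      .setStrategy(strategy).setMissingValue(missingValue)
    val model = imputer.fit(df)
    model.transform(df).select(col("expected_" + strategy), col("out"))
      .collect().foreach { case Row(exp: Double, out: Double) =>
        assert(exp ~== out absTol 1e-5,
          s"Imputed values differ. Expected: $exp, actual: $out")
      }
  }
}
```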





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80593819
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+    ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
--- End diff --

Shall we add Since annotations for the setters?
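For reference, a hedged sketch of what annotated setters might look like (the version number is illustrative and would need to match the actual release; this is a fragment of the `Imputer` class above, not standalone code):

```scala
  /** @group setParam */
  @Since("2.1.0")
  def setInputCol(value: String): this.type = set(inputCol, value)

  /** @group setParam */
  @Since("2.1.0")
  def setOutputCol(value: String): this.type = set(outputCol, value)

  /**
   * Imputation strategy. Available options are ["mean", "median"].
   * @group setParam
   */
  @Since("2.1.0")
  def setStrategy(value: String): this.type = set(strategy, value)
```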


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80593061
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+    ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
--- End diff --

Say here that this does not support categorical features yet and will 
transform them, possibly creating incorrect values for a categorical feature.  
Also add JIRA number for supporting them.





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80594974
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+    ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
+  def setInputCol(value: String): this.type = set(inputCol, value)
+
+  /** @group setParam */
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  /**
+   * Imputation strategy. Available options are ["mean", "median"].
+   * @group setParam
+   */
+  def setStrategy(value: String): this.type = set(strategy, value)
+
+  /** @group setParam */
+  def setMissingValue(value: Double): this.type = set(missingValue, value)
+
+  setDefault(strategy -> "mean", missingValue -> Double.NaN)
+
+  override def fit(dataset: Dataset[_]): ImputerModel = {
+transformSchema(dataset.schema, logging = true)
+val ic = 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80593916
  
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala ---
@@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.{Estimator, Model}
+import org.apache.spark.ml.param._
+import org.apache.spark.ml.param.shared.{HasInputCol, HasOutputCol}
+import org.apache.spark.ml.util._
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+
+/**
+ * Params for [[Imputer]] and [[ImputerModel]].
+ */
+private[feature] trait ImputerParams extends Params with HasInputCol with 
HasOutputCol {
+
+  /**
+   * The imputation strategy.
+   * If "mean", then replace missing values using the mean value of the 
feature.
+   * If "median", then replace missing values using the approximate median 
value of the feature.
+   * Default: mean
+   *
+   * @group param
+   */
+  final val strategy: Param[String] = new Param(this, "strategy", 
"strategy for imputation. " +
+"If mean, then replace missing values using the mean value of the 
feature. " +
+"If median, then replace missing values using the median value of the 
feature.",
+    ParamValidators.inArray[String](Imputer.supportedStrategyNames.toArray))
+
+  /** @group getParam */
+  def getStrategy: String = $(strategy)
+
+  /**
+   * The placeholder for the missing values. All occurrences of 
missingValue will be imputed.
+   * Default: Double.NaN
+   *
+   * @group param
+   */
+  final val missingValue: DoubleParam = new DoubleParam(this, 
"missingValue",
+"The placeholder for the missing values. All occurrences of 
missingValue will be imputed")
+
+  /** @group getParam */
+  def getMissingValue: Double = $(missingValue)
+
+  /** Validates and transforms the input schema. */
+  protected def validateAndTransformSchema(schema: StructType): StructType 
= {
+val inputType = schema($(inputCol)).dataType
+SchemaUtils.checkColumnTypes(schema, $(inputCol), Seq(DoubleType, 
FloatType))
+require(!schema.fieldNames.contains($(outputCol)),
+  s"Output column ${$(outputCol)} already exists.")
+SchemaUtils.appendColumn(schema, $(outputCol), inputType)
+  }
+}
+
+/**
+ * :: Experimental ::
+ * Imputation estimator for completing missing values, either using the 
mean or the median
+ * of the column in which the missing values are located. The input column 
should be of
+ * DoubleType or FloatType.
+ *
+ * Note that the mean/median value is computed after filtering out missing 
values.
+ * All Null values in the input column are treated as missing, and so are 
also imputed.
+ */
+@Experimental
+class Imputer @Since("2.1.0")(override val uid: String)
+  extends Estimator[ImputerModel] with ImputerParams with 
DefaultParamsWritable {
+
+  @Since("2.1.0")
+  def this() = this(Identifiable.randomUID("imputer"))
+
+  /** @group setParam */
+  def setInputCol(value: String): this.type = set(inputCol, value)
+
+  /** @group setParam */
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  /**
+   * Imputation strategy. Available options are ["mean", "median"].
+   * @group setParam
+   */
+  def setStrategy(value: String): this.type = set(strategy, value)
+
+  /** @group setParam */
+  def setMissingValue(value: Double): this.type = set(missingValue, value)
+
+  setDefault(strategy -> "mean", missingValue -> Double.NaN)
+
+  override def fit(dataset: Dataset[_]): ImputerModel = {
+transformSchema(dataset.schema, logging = true)
+val ic = 

[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80543997
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.util.{DefaultReadWriteTest}
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.Row
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+  val model = imputer.fit(df)
+  model.transform(df).select("expected_" + strategy, 
"out").collect().foreach {
+   case Row(exp: Double, out: Double) =>
+  assert(exp ~== out absTol 1e-5, s"Imputed values differ. 
Expected: $exp, actual: $out")
+  }
+}
+  }
+
--- End diff --

Yeah, actually this also fails if the entire input column is the missing 
value. We need to beef up the test suite :)
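A test for that edge case could be sketched as follows (the exception type here is an assumption, since the thread has not yet settled on whether `fit` should raise an error or pass the data through unchanged):

```scala
  test("Imputer fails when all input values are missing") {
    val df = spark.createDataFrame(Seq(
      (0, Double.NaN),
      (1, Double.NaN)
    )).toDF("id", "value")
    Seq("mean", "median").foreach { strategy =>
      val imputer = new Imputer()
        .setInputCol("value").setOutputCol("out").setStrategy(strategy)
      // Assumes fit is changed to raise a clear error instead of an NPE.
      intercept[SparkException] {
        imputer.fit(df)
      }
    }
  }
```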





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80518709
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.util.{DefaultReadWriteTest}
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.Row
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+  val model = imputer.fit(df)
+  model.transform(df).select("expected_" + strategy, 
"out").collect().foreach {
+   case Row(exp: Double, out: Double) =>
+  assert(exp ~== out absTol 1e-5, s"Imputed values differ. 
Expected: $exp, actual: $out")
+  }
+}
+  }
+
+  test("Imputer should handle NaNs when computing surrogate value, if 
missingValue is not NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 3.0, 3.0, 3.0),
+  (2, Double.NaN, Double.NaN, Double.NaN),
+  (3, -1.0, 2.0, 3.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+.setMissingValue(-1.0)
+  val model = imputer.fit(df)
+  model.transform(df).select("expected_" + strategy, 
"out").collect().foreach {
+case Row(exp: Double, out: Double) =>
+  assert((exp.isNaN && out.isNaN) || (exp ~== out absTol 1e-5),
+s"Imputed values differ. Expected: $exp, actual: $out")
+  }
+}
+  }
+
+  test("Imputer for Float with missing Value -1.0") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0F, 1.0F, 1.0F),
+  (1, 3.0F, 3.0F, 3.0F),
+  (2, 10.0F, 10.0F, 10.0F),
+  (3, 10.0F, 10.0F, 10.0F),
+  (4, -1.0F, 6.0F, 3.0F)
+)).toDF("id", "value", "expected_mean", "expected_median")
+
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+.setMissingValue(-1)
+  val model = imputer.fit(df)
+  val result = model.transform(df)
--- End diff --

This is never used.
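If the intent was to keep the `result` val, one minimal fix is to use it in the subsequent assertion instead of calling `transform` again (a sketch only; the column names follow the test fixture above):

```scala
      val result = model.transform(df)
      result.select("expected_" + strategy, "out").collect().foreach {
        case Row(exp: Float, out: Float) =>
          assert(exp == out,
            s"Imputed values differ. Expected: $exp, actual: $out")
      }
```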





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80488125
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.util.{DefaultReadWriteTest}
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.Row
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+  val model = imputer.fit(df)
+  model.transform(df).select("expected_" + strategy, 
"out").collect().foreach {
+   case Row(exp: Double, out: Double) =>
+  assert(exp ~== out absTol 1e-5, s"Imputed values differ. 
Expected: $exp, actual: $out")
+  }
+}
+  }
+
--- End diff --

Good catch, yes - obviously the imputer can't actually do anything useful in 
that case, but it should either throw a useful error or return the dataset 
unchanged.

I would favor an error in this case: if a user explicitly wants to impute 
missing data and all of their data is missing, it is better to blow up now 
than to fail later in the pipeline.





[GitHub] spark pull request #11601: [SPARK-13568] [ML] Create feature transformer to ...

2016-09-26 Thread sethah
Github user sethah commented on a diff in the pull request:

https://github.com/apache/spark/pull/11601#discussion_r80484748
  
--- Diff: 
mllib/src/test/scala/org/apache/spark/ml/feature/ImputerSuite.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.ml.feature
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.ml.util.{DefaultReadWriteTest}
+import org.apache.spark.mllib.util.MLlibTestSparkContext
+import org.apache.spark.mllib.util.TestingUtils._
+import org.apache.spark.sql.Row
+
+class ImputerSuite extends SparkFunSuite with MLlibTestSparkContext with 
DefaultReadWriteTest {
+
+  test("Imputer for Double with default missing Value NaN") {
+val df = spark.createDataFrame( Seq(
+  (0, 1.0, 1.0, 1.0),
+  (1, 1.0, 1.0, 1.0),
+  (2, 3.0, 3.0, 3.0),
+  (3, 4.0, 4.0, 4.0),
+  (4, Double.NaN, 2.25, 1.0)
+)).toDF("id", "value", "expected_mean", "expected_median")
+Seq("mean", "median").foreach { strategy =>
+  val imputer = new 
Imputer().setInputCol("value").setOutputCol("out").setStrategy(strategy)
+  val model = imputer.fit(df)
+  model.transform(df).select("expected_" + strategy, 
"out").collect().foreach {
+   case Row(exp: Double, out: Double) =>
+  assert(exp ~== out absTol 1e-5, s"Imputed values differ. 
Expected: $exp, actual: $out")
+  }
+}
+  }
+
--- End diff --

We need to add tests for the case where the entire column is `null` or 
`NaN`. I just checked the `NaN` case, and it throws an NPE in the fit method.

