zero323 commented on a change in pull request #26247: [SPARK-29566][ML] Imputer should support single-column input/output
URL: https://github.com/apache/spark/pull/26247#discussion_r366101391
 
 

 ##########
 File path: mllib/src/main/scala/org/apache/spark/ml/feature/Imputer.scala
 ##########
 @@ -205,6 +227,14 @@ class ImputerModel private[ml] (
 
   import ImputerModel._
 
+  /** @group setParam */
+  @Since("3.0.0")
+  def setInputCol(value: String): this.type = set(inputCol, value)
 
 Review comment:
   > @zero323 There is a check on the Scala side to make sure only `setInputCol`/`setOutputCol` or `setInputCols`/`setOutputCols` is set
   
   That's what confuses me. Let's say the workflow looks like this:
   
   ```
   import org.apache.spark.ml.feature.Imputer
   
   val df = Seq((1, 2)).toDF("x1", "x2")
   
   val mm = new Imputer()
      .setInputCols(Array("x1", "x2"))
      .setOutputCols(Array("x1_", "x2_"))
      .fit(df)
   ```
   
   You cannot switch to single `col` at the model level:
   
   ```
   mm.setInputCol("x1").setOutputCol("x1_").transform(df)
   
   // java.lang.IllegalArgumentException: requirement failed: ImputerModel
   // ImputerModel: uid=imputer_5923f59d0d3a, strategy=mean, missingValue=NaN,
   // numInputCols=2, numOutputCols=2 requires exactly one of inputCol, inputCols
   // Params to be set, but both are set.
   ```
   
   without clearing `cols` explicitly:
   
   ```
   mm.clear(mm.inputCols).clear(mm.outputCols).transform(df)
   ```
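   
   For reference, the full sequence then has to be something like this (a sketch based on the snippets above, reusing `mm` and `df`; it assumes the multi-column params must be cleared before the single-column setters take effect):
   
   ```
   // Clear the multi-column params first, then set the single-column ones;
   // otherwise the exclusive-param check shown above fails.
   mm.clear(mm.inputCols)
     .clear(mm.outputCols)
     .setInputCol("x1")
     .setOutputCol("x1_")
     .transform(df)
   ```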
   
   That's really not an intuitive workflow, if this is what was intended.
   
   If we only want to support `Imputer.setInputCol` -> `ImputerModel.setInputCol`, then there is no point in having this method at all:
   
   ```
   val ms = new Imputer().setInputCol("x1").setOutputCol("x1_").fit(df)
   
   ms.setInputCol("x2").setOutputCol("x2_").transform(df)
   
   // org.apache.spark.sql.AnalysisException: cannot resolve '`x2`' given input
   // columns: [x1];;
   ```
   
   as the surrogate contains only the column used for fitting:
   
   ```
   scala> ms.surrogateDF
   res13: org.apache.spark.sql.DataFrame = [x1: double]
   ```
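   
   Compare with the multi-column model, where the surrogate covers both inputs (schema sketch only; exact values depend on the data):
   
   ```
   scala> mm.surrogateDF
   // expected schema: [x1: double, x2: double]
   ```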
   
