holdenk commented on a change in pull request #20146: [SPARK-11215][ML] Add
multiple columns support to StringIndexer
URL: https://github.com/apache/spark/pull/20146#discussion_r243724080
##########
File path: mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala
##########
@@ -130,21 +165,69 @@ class StringIndexer @Since("1.4.0") (
@Since("1.4.0")
def setOutputCol(value: String): this.type = set(outputCol, value)
+ /** @group setParam */
+ @Since("3.0.0")
+ def setInputCols(value: Array[String]): this.type = set(inputCols, value)
+
+ /** @group setParam */
+ @Since("3.0.0")
+ def setOutputCols(value: Array[String]): this.type = set(outputCols, value)
+
+ private def countByValue(
+ dataset: Dataset[_],
+ inputCols: Array[String]): Array[OpenHashMap[String, Long]] = {
+
+ val aggregator = new StringIndexerAggregator(inputCols.length)
+ implicit val encoder = Encoders.kryo[Array[OpenHashMap[String, Long]]]
+
+ dataset.select(inputCols.map(col(_).cast(StringType)): _*)
+ .toDF
+ .groupBy().agg(aggregator.toColumn)
+ .as[Array[OpenHashMap[String, Long]]]
+ .collect()(0)
+ }
+
@Since("2.0.0")
override def fit(dataset: Dataset[_]): StringIndexerModel = {
transformSchema(dataset.schema, logging = true)
- val values = dataset.na.drop(Array($(inputCol)))
- .select(col($(inputCol)).cast(StringType))
- .rdd.map(_.getString(0))
-    val labels = $(stringOrderType) match {
-      case StringIndexer.frequencyDesc => values.countByValue().toSeq.sortBy(-_._2)
-        .map(_._1).toArray
-      case StringIndexer.frequencyAsc => values.countByValue().toSeq.sortBy(_._2)
-        .map(_._1).toArray
-      case StringIndexer.alphabetDesc => values.distinct.collect.sortWith(_ > _)
-      case StringIndexer.alphabetAsc => values.distinct.collect.sortWith(_ < _)
-    }
- copyValues(new StringIndexerModel(uid, labels).setParent(this))
+
+ val (inputCols, _) = getInOutCols()
+
+ val filteredDF = dataset.na.drop(inputCols)
Review comment:
Just noticed a possible problem with this, sorry for not catching it sooner
in the review. Looking at `drop` in `DataFrameNaFunctions`, the promised
behaviour is
> Returns a new `DataFrame` that drops rows containing any null or NaN
values.
This means that if there is a `Dataset` with a column containing a lot of
nulls, it could skew the counts for the other columns, or even cause
low-count entries in other columns to be left out of the index entirely. I
don't think we can use this to clean the dataset, and you will need to handle
nulls further down, unless I'm missing something.
(e.g. consider:
`Row(a, null, b)`
`Row(c, null, c)`
If we made a string indexer on this it would give incorrect results, since
every row would be dropped.)
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]