Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18513#discussion_r129748089
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/FeatureHasher.scala ---
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.ml.feature
+
+import org.apache.spark.annotation.{Experimental, Since}
+import org.apache.spark.ml.Transformer
+import org.apache.spark.ml.attribute.AttributeGroup
+import org.apache.spark.ml.linalg.Vectors
+import org.apache.spark.ml.param.{IntParam, ParamMap, ParamValidators}
+import org.apache.spark.ml.param.shared.{HasInputCols, HasOutputCol}
+import org.apache.spark.ml.util.{DefaultParamsReadable, DefaultParamsWritable, Identifiable, SchemaUtils}
+import org.apache.spark.mllib.feature.{HashingTF => OldHashingTF}
+import org.apache.spark.sql.{DataFrame, Dataset, Row}
+import org.apache.spark.sql.functions._
+import org.apache.spark.sql.types._
+import org.apache.spark.util.Utils
+import org.apache.spark.util.collection.OpenHashMap
+
+/**
+ * Feature hashing projects a set of categorical or numerical features into a feature vector of
+ * specified dimension (typically substantially smaller than that of the original feature
+ * space). This is done using the hashing trick (https://en.wikipedia.org/wiki/Feature_hashing)
+ * to map features to indices in the feature vector.
+ *
+ * The [[FeatureHasher]] transformer operates on multiple columns. Each column may contain either
+ * numeric or categorical features. Behavior and handling of column data types is as follows:
+ *  -Numeric columns: For numeric features, the hash value of the column name is used to map the
+ *                    feature value to its index in the feature vector. Numeric features are never
+ *                    treated as categorical, even when they are integers. You must explicitly
+ *                    convert numeric columns containing categorical features to strings first.
+ *  -String columns: For categorical features, the hash value of the string "column_name=value"
+ *                   is used to map to the vector index, with an indicator value of `1.0`.
+ *                   Thus, categorical features are "one-hot" encoded
+ *                   (similarly to using [[OneHotEncoder]] with `dropLast=false`).
+ *  -Boolean columns: Boolean values are treated in the same way as string columns. That is,
+ *                    boolean features are represented as "column_name=true" or "column_name=false",
+ *                    with an indicator value of `1.0`.
+ *
+ * Null (missing) values are ignored (implicitly zero in the resulting feature vector).
+ *
+ * Since a simple modulo is used to transform the hash function to a vector index,
+ * it is advisable to use a power of two as the numFeatures parameter;
+ * otherwise the features will not be mapped evenly to the vector indices.
+ *
+ * {{{
+ *   val df = Seq(
+ *     (2.0, true, "1", "foo"),
+ *     (3.0, false, "2", "bar")
+ *   ).toDF("real", "bool", "stringNum", "string")
+ *
+ *   val hasher = new FeatureHasher()
+ *     .setInputCols("real", "bool", "stringNum", "string")
+ *     .setOutputCol("features")
+ *
+ *   hasher.transform(df).show()
+ *
+ *   +----+-----+---------+------+--------------------+
+ *   |real| bool|stringNum|string|            features|
+ *   +----+-----+---------+------+--------------------+
+ *   | 2.0| true|        1|   foo|(262144,[51871,63...|
+ *   | 3.0|false|        2|   bar|(262144,[6031,806...|
+ *   +----+-----+---------+------+--------------------+
+ * }}}
+ */
+@Experimental
+@Since("2.3.0")
+class FeatureHasher(@Since("2.3.0") override val uid: String) extends Transformer
+  with HasInputCols with HasOutputCol with DefaultParamsWritable {
+
+  @Since("2.3.0")
+  def this() = this(Identifiable.randomUID("featureHasher"))
+
+  /**
+   * Number of features. Should be greater than 0.
+   * (default = 2^18^)
+   * @group param
+   */
+  @Since("2.3.0")
+  val numFeatures = new IntParam(this, "numFeatures", "number of features (> 0)",
+    ParamValidators.gt(0))
+
+  setDefault(numFeatures -> (1 << 18))
+
+  /** @group getParam */
+  @Since("2.3.0")
+  def getNumFeatures: Int = $(numFeatures)
+
+  /** @group setParam */
+  @Since("2.3.0")
+  def setNumFeatures(value: Int): this.type = set(numFeatures, value)
+
+  /** @group setParam */
+  @Since("2.3.0")
+  def setInputCols(values: String*): this.type = setInputCols(values.toArray)
+
+  /** @group setParam */
+  @Since("2.3.0")
+  def setInputCols(value: Array[String]): this.type = set(inputCols, value)
+
+  /** @group setParam */
+  @Since("2.3.0")
+  def setOutputCol(value: String): this.type = set(outputCol, value)
+
+  @Since("2.3.0")
+  override def transform(dataset: Dataset[_]): DataFrame = {
+    val hashFunc: Any => Int = OldHashingTF.murmur3Hash
+    val n = $(numFeatures)
+    val localInputCols = $(inputCols)
+
+    val outputSchema = transformSchema(dataset.schema)
+    val realFields = outputSchema.fields.filter { f =>
+      f.dataType.isInstanceOf[NumericType]
+    }.map(_.name).toSet
+
+    def getDouble(x: Any): Double = {
+      x match {
+        case n: java.lang.Number =>
+          n.doubleValue()
+        case other =>
+          // will throw ClassCastException if it cannot be cast, as would row.getDouble
+          other.asInstanceOf[Double]
+      }
+    }
+
+    val hashFeatures = udf { row: Row =>
+      val map = new OpenHashMap[Int, Double]()
+      localInputCols.foreach { colName =>
+        val fieldIndex = row.fieldIndex(colName)
+        if (!row.isNullAt(fieldIndex)) {
+          val (rawIdx, value) = if (realFields(colName)) {
+            // numeric values are kept as is, with vector index based on hash of "column_name"
+            val value = getDouble(row.get(fieldIndex))
+            val hash = hashFunc(colName)
+            (hash, value)
+          } else {
+            // string and boolean values are treated as categorical, with an indicator value of 1.0
+            // and vector index based on hash of "column_name=value"
+            val value = row.get(fieldIndex).toString
+            val fieldName = s"$colName=$value"
+            val hash = hashFunc(fieldName)
+            (hash, 1.0)
+          }
+          val idx = Utils.nonNegativeMod(rawIdx, n)
+          map.changeValue(idx, value, v => v + value)
+        }
+      }
+      Vectors.sparse(n, map.toSeq)
+    }
+
+    val metadata = outputSchema($(outputCol)).metadata
+    dataset.select(
+      col("*"),
+      hashFeatures(struct($(inputCols).map(col): _*)).as($(outputCol), metadata))
+  }
+
+  @Since("2.3.0")
+  override def copy(extra: ParamMap): FeatureHasher = defaultCopy(extra)
+
+  @Since("2.3.0")
+  override def transformSchema(schema: StructType): StructType = {
+    val fields = schema($(inputCols).toSet)
+    fields.foreach { fieldSchema =>
+      val dataType = fieldSchema.dataType
+      val fieldName = fieldSchema.name
+      require(dataType.isInstanceOf[NumericType] ||
+        dataType.isInstanceOf[StringType] ||
+        dataType.isInstanceOf[BooleanType],
+        s"FeatureHasher requires columns to be of NumericType, BooleanType or StringType. " +
+          s"Column $fieldName was $dataType")
+    }
+    val attrGroup = new AttributeGroup($(outputCol), $(numFeatures))
--- End diff ---
It seems that we don't store ```Attributes``` in the ```AttributeGroup``` here, but we do in
```VectorAssembler```, and both ```FeatureHasher``` and ```VectorAssembler``` can be followed
directly by ML algorithms. I'd like to confirm whether this is intentional. I understand it may be
due to performance considerations, and users may not be interested in the attributes of hashed
features. We can leave it as is until we find that it affects some scenarios.