Github user oliverpierson commented on a diff in the pull request:
https://github.com/apache/spark/pull/13176#discussion_r64050222
--- Diff: docs/ml-features.md ---
@@ -1064,7 +1069,8 @@ categorical features.
The bin ranges are chosen by taking a sample of the data and dividing it
into roughly equal parts.
The lower and upper bin bounds will be `-Infinity` and `+Infinity`,
covering all real values.
This attempts to find `numBuckets` partitions based on a sample of the
given input data, but it may
-find fewer depending on the data sample values.
+find fewer depending on the data sample values. Relative precision of the
approxQuantile is set using
--- End diff ---
Actually, I think the
[description](https://github.com/oliverpierson/spark/blob/a5ccc0ecbd6f3960351027c90e0c9221aa15f2db/mllib/src/main/scala/org/apache/spark/ml/feature/QuantileDiscretizer.scala#L71)
in `QuantileDiscretizer.scala` is misleading, and that's my fault. Any
sampling that is performed is done by `approxQuantile` in DataFrame stats. So
it would probably be best to just say something like "the bin ranges are
chosen using `DataFrame.stats.approxQuantile`...". Alternatively, we could say
"The bin ranges are chosen using the Greenwald-Khanna algorithm..." since
that's what `approxQuantile` uses.
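
For reference, here's a minimal sketch of calling `approxQuantile` directly
through the DataFrame stats API; the column name and data are made up, and the
last argument is the relative error target:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("approxQuantileSketch").getOrCreate()
import spark.implicits._

// Hypothetical single-column DataFrame of feature values.
val df = Seq(1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0, 55.0).toDF("feature")

// Greenwald-Khanna approximate quantiles; the last argument is the
// relative error target (0.0 would compute exact quantiles).
val splits = df.stat.approxQuantile("feature", Array(0.25, 0.5, 0.75), 0.001)
// QuantileDiscretizer uses split points like these (with -Infinity and
// +Infinity added at the ends) as the bin boundaries.
```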
Also, in this latest implementation of `QuantileDiscretizer`, the
number of buckets found will always equal `numBuckets`. I should probably
submit a new PR with an updated documentation/description in
`QuantileDiscretizer.scala`.
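
And a minimal sketch of the `QuantileDiscretizer` usage itself (assuming the
standard `setNumBuckets` / `fit` / `transform` flow; the `hour` column and
values are just an illustration, and it reuses the implicits from the sketch
above):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

// Hypothetical data with a numeric column to be bucketized.
val data = Seq((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2)).toDF("id", "hour")

val discretizer = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("bucket")
  .setNumBuckets(3)  // requested number of output buckets

// fit() computes the split points via approxQuantile,
// transform() maps each value to its bucket index.
discretizer.fit(data).transform(data).show()
```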