Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/16355
the only problem I see is that with this code we generate k-1 clusters
instead of k. However, the algorithm documentation states that it is not
guaranteed to generate exactly k clusters; it may produce fewer if the leaf
clusters are not divisible (see
spark/mllib/src/main/scala/org/apache/spark/mllib/clustering/BisectingKMeans.scala):
**_Iteratively it finds divisible clusters on the bottom level and bisects
each of them using k-means, until there are `k` leaf clusters in total or no
leaf clusters are divisible._**
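As a quick illustration of that behavior, here is a minimal sketch (made-up toy
data, assuming an existing `SparkContext` named `sc`) where requesting `k = 4`
yields fewer leaf clusters because the data cannot support that many splits:

```scala
import org.apache.spark.mllib.clustering.BisectingKMeans
import org.apache.spark.mllib.linalg.Vectors

// Toy data: only three distinct points, so at most three leaf clusters are possible.
val data = sc.parallelize(Seq(
  Vectors.dense(0.0, 0.0),
  Vectors.dense(1.0, 1.0),
  Vectors.dense(10.0, 10.0)
))

val model = new BisectingKMeans().setK(4).run(data)

// The model may report fewer leaf clusters than the requested k.
println(s"requested k = 4, actual leaf clusters = ${model.clusterCenters.length}")
```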
In the dataset Alok provided, one of the clusters that was assumed to be
divisible was split into two clusters, one of which received all of the
points while the other received none, which is what caused the error (his
cluster 162, the child of 81, was empty, while cluster 163 was non-empty
after reassignment).
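For reference, a rough way to inspect leaf-cluster occupancy after fitting,
reusing the hypothetical `model` and `data` from the sketch above; a center
index with no assigned points would correspond to the empty-child situation
described above:

```scala
// Count how many points fall into each predicted leaf cluster.
val counts = model.predict(data)
  .countByValue()
  .toSeq
  .sortBy(_._1)

// Any index present in model.clusterCenters but missing here received no
// points after reassignment.
counts.foreach { case (cluster, n) => println(s"cluster $cluster -> $n points") }
```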