Dear community,

When I call `sklearn.cluster.DBSCAN`, I find it can incur a huge memory cost. I tried to reduce the cost by casting my input data to np.float16 and passing "precomputed" as my metric. However, it still appears to use float64 internally: when `fit_predict` is called, I get errors indicating that a float64 allocation failed. Any suggestions for reducing the memory/computation cost would be highly appreciated. Thanks.
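For context, a minimal sketch of one workaround I am considering (the data and parameter values here are illustrative, not my real dataset): instead of a dense precomputed distance matrix, precompute only the within-eps neighborhoods as a sparse distance matrix via `NearestNeighbors.radius_neighbors_graph`, then pass that sparse matrix to DBSCAN with metric="precomputed".

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

# Illustrative data; the real dataset is much larger.
rng = np.random.default_rng(0)
X = rng.random((1000, 8)).astype(np.float32)  # scikit-learn may upcast float16 anyway

eps = 0.3

# Precompute only the neighbors within eps as a sparse distance matrix,
# instead of materializing the full dense n x n (float64) matrix.
nn = NearestNeighbors(radius=eps).fit(X)
D_sparse = nn.radius_neighbors_graph(mode="distance")

# DBSCAN accepts a sparse precomputed matrix; only stored entries
# are treated as candidate neighbors.
labels = DBSCAN(eps=eps, min_samples=5, metric="precomputed").fit_predict(D_sparse)
print(labels.shape)  # one label per sample
```

This avoids ever holding the dense pairwise matrix in memory, though it assumes eps is known up front so the sparse graph covers all needed neighborhoods.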
All the best,
--
Mingzhe HU
Columbia University in the City of New York
M.S. in Electrical Engineering
mingzhe...@columbia.edu <mh4...@columbia.edu>
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn