[ https://issues.apache.org/jira/browse/SPARK-17400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15465845#comment-15465845 ]

Frank Dai commented on SPARK-17400:
-----------------------------------

Currently I'm haunted by this issue in production: I have 6 million users and 
564 thousand items, and MinMaxScaler.transform() takes more than 22 hours for 
prediction, while ALS.fit() is actually fast.

My cluster has 70 nodes and each node has 16 cores and 32GB memory.

> MinMaxScaler.transform() outputs DenseVector by default, which causes poor 
> performance
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-17400
>                 URL: https://issues.apache.org/jira/browse/SPARK-17400
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, MLlib
>    Affects Versions: 1.6.1, 1.6.2, 2.0.0
>            Reporter: Frank Dai
>
> MinMaxScaler.transform() outputs DenseVector by default, which causes poor 
> performance and consumes a lot of memory.
> The most important line of code is the following:
> https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/feature/MinMaxScaler.scala#L195
> I suggest that the code calculate the number of non-zero elements in 
> advance: if fewer than half of the elements in the vector are non-zero, use 
> SparseVector; otherwise use DenseVector.
> Alternatively, we could make this configurable by adding a parameter to 
> MinMaxScaler.transform(), for example MinMaxScaler.transform(isDense: 
> Boolean), so that users can decide whether their output is dense or sparse.
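A minimal sketch of the suggested heuristic, assuming the Spark 2.x 
ml.linalg API (toCompactVector is an illustrative helper name, not part of 
Spark; only Vectors.dense and Vectors.sparse are real API):

{code:scala}
import org.apache.spark.ml.linalg.{Vector, Vectors}

// Illustrative helper (not part of Spark): choose the cheaper vector
// representation based on the non-zero count, as suggested above.
def toCompactVector(values: Array[Double]): Vector = {
  val nnz = values.count(_ != 0.0)
  if (nnz < values.length / 2) {
    // Fewer than half the entries are non-zero: build a SparseVector.
    val indices = new Array[Int](nnz)
    val nonZeros = new Array[Double](nnz)
    var i = 0
    var k = 0
    while (i < values.length) {
      if (values(i) != 0.0) {
        indices(k) = i
        nonZeros(k) = values(i)
        k += 1
      }
      i += 1
    }
    Vectors.sparse(values.length, indices, nonZeros)
  } else {
    Vectors.dense(values)
  }
}
{code}

Note that ml.linalg.Vector already exposes a compressed method that returns 
whichever of the dense or sparse representations uses less storage, so 
calling .compressed on the transformed vector may be enough to get this 
behavior without a new parameter.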


