[ https://issues.apache.org/jira/browse/SPARK-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15764191#comment-15764191 ]

Sean Owen commented on SPARK-18946:
-----------------------------------

I'm not sure what you're proposing as a fix though -- a big object is big, yes. 
It is already compressed. Does it cause a problem that is deeper than that?

> treeAggregate is inefficient when aggregating high-dimensional vectors 
> in ML algorithms
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18946
>                 URL: https://issues.apache.org/jira/browse/SPARK-18946
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, MLlib
>            Reporter: zunwen you
>              Labels: features
>
> In many machine learning algorithms, we have to treeAggregate large 
> vectors/arrays because of the large number of features. Unfortunately, the 
> treeAggregate operation on an RDD becomes inefficient when the dimension of 
> the vectors/arrays exceeds a million: such a vector/array typically occupies 
> more than 100MB of memory, and transferring a 100MB element among executors 
> is quite inefficient in Spark.
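
For reference, a minimal sketch of the aggregation pattern the report
describes (hypothetical dimensions and job setup, not the reporter's actual
code). Each combOp merge ships a dense buffer of roughly 80MB between
executors, which is the cost at issue:

// Minimal sketch: treeAggregate of a dense, gradient-style buffer with
// ~10M features (hypothetical example, assumes a standard Spark 2.x setup).
import org.apache.spark.sql.SparkSession

object TreeAggregateSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("treeAggregateSketch").getOrCreate()
    val sc = spark.sparkContext

    val dim = 10 * 1000 * 1000                 // one Array[Double](dim) is ~80MB
    val data = sc.parallelize(0L until 1000000L, numSlices = 200)

    // seqOp folds each record into a per-partition buffer; combOp then merges
    // the dense buffers pairwise up a tree of the given depth, so each merge
    // transfers one ~80MB buffer between executors.
    val agg = data.treeAggregate(new Array[Double](dim))(
      seqOp = (buf, x) => { buf((x % dim).toInt) += 1.0; buf },
      combOp = (a, b) => {
        var i = 0
        while (i < dim) { a(i) += b(i); i += 1 }
        a
      },
      depth = 2
    )
    println(agg.take(3).mkString(", "))
    spark.stop()
  }
}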


