[ https://issues.apache.org/jira/browse/SPARK-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14724701#comment-14724701 ]
Apache Spark commented on SPARK-10329:
--------------------------------------

User 'HuJiayin' has created a pull request for this issue:

https://github.com/apache/spark/pull/8546

> Cost RDD in k-means|| initialization is not storage-efficient
> -------------------------------------------------------------
>
>                 Key: SPARK-10329
>                 URL: https://issues.apache.org/jira/browse/SPARK-10329
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.3.1, 1.4.1, 1.5.0
>            Reporter: Xiangrui Meng
>            Assignee: hujiayin
>              Labels: clustering
>
> Currently we use `RDD[Vector]` to store point costs during k-means||
> initialization, where each `Vector` has size `runs`. This is not
> storage-efficient because `runs` is usually 1, so each record is a `Vector`
> of size 1. All we need is the 8 bytes to store the cost, but we introduce
> two objects (the DenseVector and its values array), which could cost 16
> bytes. That is 200% overhead. Thanks [~Grace Huang] and Jiayin Hu from Intel
> for reporting this issue!
>
> There are several solutions:
>
> 1. Use `RDD[Array[Double]]` instead of `RDD[Vector]`, which saves 8 bytes
>    per record.
> 2. Use `RDD[Array[Double]]` but batch the values for storage, e.g. each
>    `Array[Double]` object covers 1024 instances, which could remove most of
>    the overhead.
>
> Besides, using MEMORY_AND_DISK instead of MEMORY_ONLY would prevent the cost
> RDD from kicking the training dataset out of memory.
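A minimal sketch of option 2, assuming per-point costs are already available
as a `costs: RDD[Double]`; `batchCosts` and the 1024 batch size are
illustrative, not the actual MLlib internals:

    import org.apache.spark.rdd.RDD

    // Option 2 sketch: pack per-point costs into fixed-size primitive
    // Array[Double] chunks, so the JVM object overhead (object header plus
    // array header) is amortized over up to 1024 values per record instead
    // of being paid for every single cost.
    def batchCosts(costs: RDD[Double], batchSize: Int = 1024): RDD[Array[Double]] = {
      costs.mapPartitions { iter =>
        iter.grouped(batchSize).map(_.toArray)
      }
    }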
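The persistence suggestion is then a one-liner, again only a sketch reusing
the hypothetical `batchCosts` above. With MEMORY_AND_DISK, cost blocks that
do not fit in memory spill to disk rather than competing with the cached
training dataset:

    import org.apache.spark.storage.StorageLevel

    // Spill cost blocks to disk under memory pressure instead of letting
    // them evict the cached training data.
    val batchedCosts = batchCosts(costs).persist(StorageLevel.MEMORY_AND_DISK)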