[ https://issues.apache.org/jira/browse/SPARK-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-5112:
-----------------------------
    Fix Version/s:     (was: 1.5.0)

> Expose SizeEstimator as a developer API
> ---------------------------------------
>
>                 Key: SPARK-5112
>                 URL: https://issues.apache.org/jira/browse/SPARK-5112
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Sandy Ryza
>            Assignee: Sandy Ryza
>            Priority: Minor
>             Fix For: 1.4.0
>
>
> "The best way to size the amount of memory consumption your dataset will 
> require is to create an RDD, put it into cache, and look at the SparkContext 
> logs on your driver program. The logs will tell you how much memory each 
> partition is consuming, which you can aggregate to get the total size of the 
> RDD."
> -the Tuning Spark page
> This is a pain.  It would be much nicer to simply expose functionality for
> understanding the memory footprint of a Java object.
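
(Per the Fix For field above, this landed in 1.4.0 as org.apache.spark.util.SizeEstimator, marked @DeveloperApi. A minimal Scala sketch of the intended usage, assuming the estimate(obj: AnyRef): Long entry point; the sample object is illustrative:)

    import org.apache.spark.util.SizeEstimator

    // Estimate how many bytes an object (and everything it references)
    // occupies on the JVM heap, without caching an RDD and reading logs.
    val footprint: Long = SizeEstimator.estimate(Array.fill(1000)("sample"))
    println(s"Estimated size: $footprint bytes")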



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
