[
https://issues.apache.org/jira/browse/SPARK-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033335#comment-15033335
]
Shivaram Venkataraman commented on SPARK-6830:
----------------------------------------------
I think the right place to do this is not in SparkR but in the SQL query
planner, where we will have more information about whether the plan is
deterministic and/or whether the DataFrame is cached, etc.
cc [~rxin] [~davies]
> Memoize frequently queried vals in RDD, such as numPartitions, count etc.
> -------------------------------------------------------------------------
>
> Key: SPARK-6830
> URL: https://issues.apache.org/jira/browse/SPARK-6830
> Project: Spark
> Issue Type: Improvement
> Components: SparkR
> Reporter: Shivaram Venkataraman
> Priority: Minor
> Labels: Starter
>
> We should memoize frequently queried vals in an RDD, such as numPartitions,
> count, etc.
> While using SparkR in RStudio, the `count` function seems to be called
> frequently by the IDE, presumably to show stats about variables in the
> workspace. This is not ideal in SparkR, because every call to count triggers
> a Spark job.
> Memoization would help in this case, but we should also see if there is some
> better way to interact with RStudio.
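For illustration only, here is a minimal sketch (not from the original issue) of what such memoization could look like at the RDD level. It assumes a hypothetical wrapper class, MemoizedRDDStats, and that the RDD's contents do not change between calls; Scala's lazy vals provide the caching.
{code:scala}
import org.apache.spark.rdd.RDD

// Hypothetical sketch: memoize driver-side stats for an RDD whose contents
// are assumed not to change between calls. The first access to `count` runs
// a Spark job; later accesses return the cached value.
class MemoizedRDDStats[T](rdd: RDD[T]) {
  // Partition metadata already lives on the driver, so this is cheap;
  // included only to mirror the issue title.
  lazy val numPartitions: Int = rdd.partitions.length

  // Triggers a job exactly once; subsequent reads reuse the result.
  lazy val count: Long = rdd.count()
}
{code}
A SparkR-side equivalent would cache the value in the R wrapper after the first call, but, as the comment above notes, a more robust place for this is the SQL layer, which knows whether the plan is deterministic and whether the DataFrame is cached.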