[
https://issues.apache.org/jira/browse/SPARK-21776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
zhaP524 updated SPARK-21776:
----------------------------
Remaining Estimate: 12h
Original Estimate: 12h
> How to use the memory-mapped file on Spark??
> --------------------------------------------
>
> Key: SPARK-21776
> URL: https://issues.apache.org/jira/browse/SPARK-21776
> Project: Spark
> Issue Type: Improvement
> Components: Block Manager, Documentation, Input/Output, Spark Core
> Affects Versions: 2.1.1
> Environment: Spark 2.1.1
> Scala 2.11.8
> Reporter: zhaP524
> Priority: Trivial
> Attachments: screenshot-1.png, screenshot-2.png
>
> Original Estimate: 12h
> Remaining Estimate: 12h
>
> In production we use Spark to fully load an HBase dimension table in order
> to generate business data. Because the base table is loaded in its entirety,
> memory pressure is very high. Can Spark handle this kind of workload with
> memory-mapped files? Is there such a mechanism, and how is it used?
> I also found a Spark parameter, spark.storage.memoryMapThreshold=2m, but it
> is not clear to me what this parameter is used for.
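>
> For reference, here is a minimal sketch (my own illustration, not taken from
> the Spark docs; the 16m value is only an example) of how this parameter could
> be set on a SparkConf:
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     // Illustrative only: raise the threshold so that only disk-store blocks
>     // larger than 16 MB are memory-mapped when read back from disk.
>     // The master is assumed to be supplied by spark-submit.
>     val conf = new SparkConf()
>       .setAppName("memory-map-threshold-example")
>       .set("spark.storage.memoryMapThreshold", "16m")
>     val sc = new SparkContext(conf)
>
> The same value can also be passed with --conf on spark-submit instead of
> being set in code.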
> There are putBytes and getBytes methods in DiskStore.scala in the Spark
> source code. Are these the memory-mapped files mentioned above? How should I
> understand them?
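>
> To make the question concrete, here is a minimal Java NIO sketch in Scala of
> what I understand memory-mapping a file to mean; this is my own illustration,
> not Spark's actual DiskStore code:
>
>     import java.io.RandomAccessFile
>     import java.nio.MappedByteBuffer
>     import java.nio.channels.FileChannel.MapMode
>
>     // Map a file read-only into off-heap address space instead of copying
>     // it into a heap byte array; the OS pages the data in lazily on access.
>     def mapReadOnly(path: String): MappedByteBuffer = {
>       val channel = new RandomAccessFile(path, "r").getChannel
>       try {
>         channel.map(MapMode.READ_ONLY, 0, channel.size())
>       } finally {
>         channel.close() // the mapping stays valid after the channel is closed
>       }
>     }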
> Please let me know if anything is unclear.
> Thank you!
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]