zhaP524 created SPARK-21776:
-------------------------------

             Summary: How to use memory-mapped files in Spark?
                 Key: SPARK-21776
                 URL: https://issues.apache.org/jira/browse/SPARK-21776
             Project: Spark
          Issue Type: Question
          Components: Block Manager, Documentation, Input/Output
    Affects Versions: 2.1.1
         Environment: Spark 2.1.1 
Scala 2.11.8
            Reporter: zhaP524


      In production, we have to load a full HBase table into Spark as a 
dimension table to generate business data. Because the base table is loaded 
in its entirety, the memory pressure is very high. Can Spark handle this 
with memory-mapped files? Is there such a mechanism, and how is it used?
      I also found a Spark parameter, 
spark.storage.memoryMapThreshold=2m, but it is not clear to me what this 
parameter is used for.
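      As far as I understand, this setting is the block size above which 
Spark's disk store memory-maps a block file when reading it back, instead of 
copying it into a buffer. A sketch of overriding it at submit time (the job 
class and jar name are placeholders, and 8m is an illustrative value):

```shell
# Illustrative: memory-map only blocks of at least 8 MB when reading them
# back from the disk store; smaller blocks are read with a normal I/O copy.
spark-submit \
  --conf spark.storage.memoryMapThreshold=8m \
  --class com.example.MyJob \
  my-job.jar
```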
       There are putBytes and getBytes methods in DiskStore.scala in the 
Spark source code; are these the memory-mapped files mentioned above? How 
should I understand them?
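       My understanding of what getBytes does, sketched in plain java.nio 
(not Spark's actual code; the threshold constant mirrors 
spark.storage.memoryMapThreshold's 2m default, and the class and method 
names here are my own):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    // Hypothetical constant mirroring spark.storage.memoryMapThreshold (2m).
    static final long MEMORY_MAP_THRESHOLD = 2L * 1024 * 1024;

    // Read one on-disk block, choosing between mmap and a plain copy.
    static ByteBuffer readBlock(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            long size = ch.size();
            if (size >= MEMORY_MAP_THRESHOLD) {
                // Large block: map the file into virtual memory. The OS pages
                // data in on demand, so the whole block never has to be
                // copied onto the JVM heap at once.
                return ch.map(FileChannel.MapMode.READ_ONLY, 0, size);
            } else {
                // Small block: a plain read into a heap buffer is cheaper
                // than the fixed cost of an mmap system call.
                ByteBuffer buf = ByteBuffer.allocate((int) size);
                while (buf.hasRemaining() && ch.read(buf) >= 0) { }
                buf.flip();
                return buf;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("block", ".bin");
        Files.write(tmp, "hello".getBytes());
        ByteBuffer buf = readBlock(tmp);
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        System.out.println(new String(out));
        Files.delete(tmp);
    }
}
```

If that reading is right, the mapping only reduces heap pressure for blocks 
already spilled to disk; it would not by itself shrink the footprint of a 
fully cached table.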
       Please let me know if you need any more information.

Best wishes!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
