[ https://issues.apache.org/jira/browse/SPARK-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14595034#comment-14595034 ]

Bogdan Ghit commented on SPARK-6112:
------------------------------------

This is my configuration:

1. My tmpfs is mounted on /dev/shm.
2. dfs.datanode.data.dir = /local/bghit/myhdfs,[RAM_DISK]/dev/shm/ramdisk
3. dfs.datanode.max.locked.memory = 1000000000

The locked-memory limit in /etc/security/limits.conf is set to unlimited, and 
ulimit -l outputs "unlimited". However, the DataNode fails to start with the 
exception "Cannot start datanode because the configured max locked memory size 
(dfs.datanode.max.locked.memory) is greater than zero and native code is not 
available." Any ideas why?
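
For reference, a minimal hdfs-site.xml sketch of the setup above (values 
copied from the list; the [RAM_DISK] prefix is HDFS's standard storage-type 
tag for a tmpfs directory):

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- [RAM_DISK] marks the tmpfs directory as the memory storage tier -->
      <value>/local/bghit/myhdfs,[RAM_DISK]/dev/shm/ramdisk</value>
    </property>
    <property>
      <name>dfs.datanode.max.locked.memory</name>
      <!-- bytes the DataNode may mlock for RAM_DISK replicas; a non-zero
           value requires the native libhadoop library to be loaded -->
      <value>1000000000</value>
    </property>

Note that the exception is about the native library rather than the ulimit: 
with a non-zero dfs.datanode.max.locked.memory the DataNode needs native code 
to mlock memory, and running "hadoop checknative -a" on the DataNode host 
reports whether libhadoop was actually found.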

Regarding my previous comment, the documentation still refers to offHeap 
instead of externalBlock.

> Provide external block store support through HDFS RAM_DISK
> ----------------------------------------------------------
>
>                 Key: SPARK-6112
>                 URL: https://issues.apache.org/jira/browse/SPARK-6112
>             Project: Spark
>          Issue Type: New Feature
>          Components: Block Manager
>            Reporter: Zhan Zhang
>         Attachments: SparkOffheapsupportbyHDFS.pdf
>
>
> The HDFS Lazy_Persist policy makes it possible to cache RDDs off-heap in 
> HDFS. We may want to provide a capability similar to Tachyon by leveraging 
> the HDFS RAM_DISK feature, for user environments that do not have Tachyon 
> deployed. This would potentially make it possible to share RDDs in memory 
> across different jobs, and even with jobs other than Spark, and to avoid 
> RDD recomputation when executors crash. 
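
As a usage sketch of the proposed feature (the job name and input path below 
are hypothetical; StorageLevel.OFF_HEAP is the existing Spark API that the 
external block store would back):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object OffHeapCacheSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("OffHeapCacheSketch"))

        // Persist outside the executor heap, in the external block store
        // (Tachyon today; HDFS RAM_DISK under this proposal), so cached
        // blocks survive executor crashes and can be shared across jobs.
        val lengths = sc.textFile("hdfs:///data/input").map(_.length)
        lengths.persist(StorageLevel.OFF_HEAP)

        println(lengths.count()) // the first action materializes the off-heap cache
        sc.stop()
      }
    }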


