hotdog created SPARK-11101:
------------------------------

             Summary: pipe() operation OOM
                 Key: SPARK-11101
                 URL: https://issues.apache.org/jira/browse/SPARK-11101
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.4.1
         Environment: spark on yarn
            Reporter: hotdog


When using the pipe() operation on large data (~10 TB), the job always fails 
with OOM.
My parameters:
executor-memory 16g
executor-cores 4
num-executors 400
spark.yarn.executor.memoryOverhead=8192
partition count: 60000
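
A minimal sketch of how the settings above would be passed on submission (the flag names are standard spark-submit options; the application jar and class name are placeholders, not from the report):

```shell
# Hypothetical spark-submit invocation matching the reported settings.
# YARN enforces executor-memory + memoryOverhead = 16g + 8g = 24g per
# container, which matches the "24 GB physical memory" limit in the log.
spark-submit \
  --master yarn \
  --executor-memory 16g \
  --executor-cores 4 \
  --num-executors 400 \
  --conf spark.yarn.executor.memoryOverhead=8192 \
  --class com.example.PipeJob \
  pipe-job.jar
```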

Does the pipe() operation use a lot of off-heap memory?
The log shows:
killed by YARN for exceeding memory limits. 24.4 GB of 24 GB physical memory 
used. Consider boosting spark.yarn.executor.memoryOverhead.

Should I keep increasing spark.yarn.executor.memoryOverhead, or is there a bug 
in the pipe() operation?
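
For illustration, a minimal Python sketch of pipe()-like behavior (this is not Spark's actual implementation, which uses a PipedRDD with separate writer threads): each partition is streamed through an external command's stdin, and the command's stdout becomes the output partition. Because the child process runs outside the JVM heap, its memory counts against the YARN container's physical-memory limit, which is why it shows up as memoryOverhead rather than executor heap.

```python
import subprocess

def pipe_partition(records, command):
    """Sketch of RDD.pipe() semantics for one partition: feed each
    record to an external command's stdin (one record per line) and
    return the command's stdout lines. The child process's memory
    lives outside the JVM heap, so YARN counts it against the
    container limit (executor-memory + memoryOverhead)."""
    proc = subprocess.Popen(
        command,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    out, _ = proc.communicate("\n".join(records) + "\n")
    return out.splitlines()

# Example: pipe a partition through `cat` (an identity transform).
result = pipe_partition(["a", "b", "c"], ["cat"])
print(result)  # → ['a', 'b', 'c']
```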




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
