[ https://issues.apache.org/jira/browse/SPARK-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278502#comment-14278502 ]
Takumi Yoshida commented on SPARK-5243:
---------------------------------------

Hi! I found that Spark hangs in the following situation as well, so I suspect there are other conditions that can trigger it.

> 1. the cluster has only one worker.

Yes, I am running standalone.

> 2. driver memory + executor memory > worker memory

I used the following settings, but it still hangs:
driver memory = 1g
executor memory = 1g
worker memory = 3g

> 3. deploy-mode = cluster

No, deploy-mode was "client" (the default).

I used the following code:
https://gist.github.com/yoshi0309/33bd912d91c0bb5cdf30

Command:
./bin/spark-submit ./ldgourmetALS.py s3n://abc-takumiyoshida/datasets/ --driver-memory 1g

Machine:
Amazon EC2 / m3.medium (3 ECU and 3.75 GB RAM)


> Spark will hang if (driver memory + executor memory) exceeds limit on a 1-worker cluster
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-5243
>                 URL: https://issues.apache.org/jira/browse/SPARK-5243
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy
>    Affects Versions: 1.2.0
>        Environment: centos, others should be similar
>           Reporter: yuhao yang
>           Priority: Minor
>
> Spark will hang if spark-submit is called under the following conditions:
> 1. the cluster has only one worker.
> 2. driver memory + executor memory > worker memory
> 3. deploy-mode = cluster
> This usually happens to beginners during development.
> There should be some exit mechanism, or at least a warning message, in the output of spark-submit.
> I am preparing a PR for this case, and I would like to hear your opinions on whether a fix is needed and what the better fix options are.
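A minimal sketch of the kind of pre-launch check proposed above, written here as a standalone Python helper rather than the actual Spark code path (the PR itself is not shown in this thread); the function names and the example memory values are illustrative assumptions:

    # Conceptual sketch only: warn instead of hanging when the single worker
    # cannot hold both the driver and the executor in cluster deploy mode.

    def parse_mem(s):
        """Parse a memory string like '1g' or '512m' into megabytes."""
        s = s.strip().lower()
        if s.endswith("g"):
            return int(float(s[:-1]) * 1024)
        if s.endswith("m"):
            return int(float(s[:-1]))
        return int(s)  # assume a plain number of megabytes

    def check_cluster_mode_fit(driver_mem, executor_mem, worker_mem):
        """In cluster deploy mode the driver and the executor must both run on
        the (only) worker, so their combined memory must not exceed it."""
        needed = parse_mem(driver_mem) + parse_mem(executor_mem)
        if needed > parse_mem(worker_mem):
            print("WARNING: driver (%s) + executor (%s) memory exceeds the worker's %s; "
                  "the application can never be scheduled."
                  % (driver_mem, executor_mem, worker_mem))
            return False
        return True

    # Example matching the reported conditions: 2g driver + 2g executor on a 3g worker.
    check_cluster_mode_fit("2g", "2g", "3g")

Note that with the numbers in the comment above (1g driver + 1g executor against a 3g worker) such a check would pass, which supports the point that some other condition must be involved in that hang.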