[jira] [Created] (SPARK-12943) spark should distribute truststore if used in yarn-cluster mode

2016-01-20 Thread John Vines (JIRA)
John Vines created SPARK-12943:
--

 Summary: spark should distribute truststore if used in 
yarn-cluster mode
 Key: SPARK-12943
 URL: https://issues.apache.org/jira/browse/SPARK-12943
 Project: Spark
  Issue Type: Improvement
  Components: YARN
Affects Versions: 1.5.2
Reporter: John Vines


spark.ssl.trustStore is used to specify the truststore to use for SSL. It is 
described briefly at 
http://spark.apache.org/docs/1.5.2/configuration.html#security

What the docs do not say is that, in YARN environments, it must be an 
absolute path that exists on every node.

In an ideal world, Spark would recognize it is running in yarn-cluster mode, 
put that file in the distributed cache, and then update the local conf in 
each driver/executor to point at its local copy.

Or, at the very least, the docs could be clearer that the setting is 
incompatible with yarn-cluster mode unless you give it a path that is 
available on all nodes.
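For reference, a sketch of the manual version of what is being requested 
here (not from the report; file, class, and jar names are hypothetical): 
ship the truststore through the YARN distributed cache yourself with 
--files and point the config at the localized copy.

{code}
# --files puts my-truststore.jks in each container's working directory,
# so a working-directory-relative path should resolve on every node.
spark-submit \
  --master yarn-cluster \
  --files /etc/ssl/my-truststore.jks \
  --conf spark.ssl.enabled=true \
  --conf spark.ssl.trustStore=my-truststore.jks \
  --class com.example.Main app.jar
{code}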






[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-19 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107061#comment-15107061 ]

John Vines commented on SPARK-12650:


SPARK_SUBMIT_OPTS seems to work. -Xmx256m changed the heap settings for the 
SparkSubmit process, but left the driver alone and did not appear to cause 
the same conflict in the executors as mentioned above. I also did not see 
any logging about that setting (unlike SPARK_JAVA_OPTS, which I mentioned 
above).
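A minimal sketch of that environment-variable approach, assuming 
spark-submit is invoked from a shell (the heap value, class, and jar names 
are illustrative, not from the thread):

{code}
# SPARK_SUBMIT_OPTS appears to be appended to the JVM flags of the
# SparkSubmit client process only, leaving driver/executor heaps alone.
export SPARK_SUBMIT_OPTS="-Xmx256m"
spark-submit --master yarn-cluster --class com.example.Main app.jar
{code}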

> No means to specify Xmx settings for SparkSubmit in yarn-cluster mode
> ----------------------------------------------------------------------
>
>                 Key: SPARK-12650
>                 URL: https://issues.apache.org/jira/browse/SPARK-12650
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 1.5.2
>         Environment: Hadoop 2.6.0
>            Reporter: John Vines
>
> Background-
> I have an app master designed to do some work and then launch a Spark job.
> Issue-
> If I use yarn-cluster, SparkSubmit does not set -Xmx on itself at all, so 
> the JVM takes its default heap, which is relatively large. That consumes 
> a large amount of vmem, and the container is killed by YARN. This can be 
> worked around by disabling YARN's vmem check, but that is a hack.
> If I run in yarn-client mode, it's fine as long as my container has enough 
> space for the driver, which is manageable. But the utter lack of an Xmx 
> setting for what I believe is a very small JVM is a problem.
> I believe this was introduced with the fix for SPARK-3884






[jira] [Commented] (SPARK-10910) spark.{executor,driver}.userClassPathFirst don't work for kryo (probably others)

2016-01-12 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15095177#comment-15095177 ]

John Vines commented on SPARK-10910:


snappy-java is another one

> spark.{executor,driver}.userClassPathFirst don't work for kryo (probably 
> others)
> --------------------------------------------------------------------------
>
>                 Key: SPARK-10910
>                 URL: https://issues.apache.org/jira/browse/SPARK-10910
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.5.1
>            Reporter: Thomas Graves
>
> Trying to use spark.{executor,driver}.userClassPathFirst to pull in a newer 
> version of kryo doesn't work. Note I was running on YARN.
> There is a bug in the kryo 1.21 that Spark uses which is fixed in kryo 1.24. 
> A customer tried to use spark.{executor,driver}.userClassPathFirst to 
> include the newer version of kryo, but it threw the following exception:
> 15/09/29 21:36:43 ERROR yarn.ApplicationMaster: User class threw exception: 
> java.lang.LinkageError: loader constraint violation: loader (instance of 
> org/apache/spark/util/ChildFirstURLClassLoader) previously initiated loading 
> for a different type with name "com/esotericsoftware/kryo/Kryo"
> java.lang.LinkageError: loader constraint violation: loader (instance of 
> org/apache/spark/util/ChildFirstURLClassLoader) previously initiated loading 
> for a different type with name "com/esotericsoftware/kryo/Kryo"
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
> The issue here is that the Spark driver instantiates a kryo class in SparkEnv:
>   val serializer = instantiateClassFromConf[Serializer](
>     "spark.serializer", "org.apache.spark.serializer.JavaSerializer")
>   logDebug(s"Using serializer: ${serializer.getClass}")
> It uses whatever version is in the Spark assembly jar.
> Then on YARN, before the ApplicationMaster starts the user application, it 
> honors user-classpath-first by installing the ChildFirstURLClassLoader, 
> which is later used when kryo is needed. That loader tries to load the 
> newer version of kryo from the user jar, and that throws the exception 
> above.
> I'm sure this could happen with any number of other classes that get loaded 
> by Spark before we try to run the user application code.
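A sketch of the kind of submission that hits this (jar and class names are 
hypothetical, not from the report):

{code}
# Ask Spark to prefer the user classpath, which bundles a newer kryo.
# The driver has already loaded the assembly's Kryo class in SparkEnv, so
# the child-first loader later defines the same class name again and the
# JVM raises the LinkageError quoted above.
spark-submit \
  --master yarn-cluster \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --jars newer-kryo.jar \
  --class com.example.Main app.jar
{code}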






[jira] [Comment Edited] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-12 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094189#comment-15094189 ]

John Vines edited comment on SPARK-12650 at 1/12/16 10:52 PM:
--

So I ran it and got this message:
{code}
SPARK_JAVA_OPTS was detected (set to '-Xmx512M').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
{code}

That was just a warning (so only a small complaint if this is the proper 
solution), and it did properly cap the vmem use.

EDIT: it appears this value applies to ALL processes, though. When I used 
512 it was fine, but when I used 256 the executors failed: their memory is 
set to 512, so the containers failed to start with `Incompatible minimum 
and maximum heap sizes specified`.
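For contrast, a sketch of the per-role alternatives listed in that 
deprecation notice, which avoid the global clash (values illustrative, not 
from the thread):

{code}
# conf/spark-defaults.conf -- scope memory per role instead of globally
# via SPARK_JAVA_OPTS. Heap size is normally driven by the *.memory
# settings; extraJavaOptions is meant for non-heap flags.
spark.driver.memory              512m
spark.executor.memory            512m
spark.executor.extraJavaOptions  -verbose:gc
{code}

Note that this still leaves the SparkSubmit client JVM itself uncapped, 
which is the gap this issue is about.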









[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-12 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094189#comment-15094189 ]

John Vines commented on SPARK-12650:


So I ran it and got this message:
{code}
SPARK_JAVA_OPTS was detected (set to '-Xmx512M').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
{code}

That was just a warning (so only a small complaint if this is the proper 
solution), and it did properly cap the vmem use.







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-08 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089673#comment-15089673 ]

John Vines commented on SPARK-12650:


How would I set those through the Java SparkLauncher?
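One plausible route (a sketch against the SparkLauncher API as I understand 
it, not confirmed in this thread; paths and class names are hypothetical): 
pass SPARK_SUBMIT_OPTS through the environment map accepted by 
SparkLauncher's constructor, and use setConf for the driver and executor 
sides.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.spark.launcher.SparkLauncher;

public class LaunchExample {
  public static void main(String[] args) throws Exception {
    // Environment for the spawned spark-submit process; SPARK_SUBMIT_OPTS
    // should cap the SparkSubmit client JVM itself (value illustrative).
    Map<String, String> env = new HashMap<>();
    env.put("SPARK_SUBMIT_OPTS", "-Xmx256m");

    Process spark = new SparkLauncher(env)
        .setAppResource("/path/to/app.jar")   // hypothetical path
        .setMainClass("com.example.Main")     // hypothetical class
        .setMaster("yarn-cluster")
        .setConf(SparkLauncher.DRIVER_MEMORY, "512m")
        .setConf(SparkLauncher.EXECUTOR_MEMORY, "512m")
        .launch();

    spark.waitFor();  // block until spark-submit exits
  }
}
{code}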







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-06 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15086167#comment-15086167 ]

John Vines commented on SPARK-12650:


I do care about knowing when the Spark job is finished. Unless there's 
another way to track the Spark job, I need to wait for it to complete.







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-06 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15086138#comment-15086138 ]

John Vines commented on SPARK-12650:


I'm launching the Spark job from inside an App Master, as I said in the 
background section.







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-06 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15086135#comment-15086135 ]

John Vines commented on SPARK-12650:


{code}
[root@datanode1-systemtest-john-1 /]# java -XX:+PrintFlagsFinal -version | grep HeapSize
    uintx ErgoHeapSizeLimit              = 0            {product}
    uintx HeapSizePerGCThread            = 87241520     {product}
    uintx InitialHeapSize               := 525375744    {product}
    uintx LargePageHeapSizeThreshold     = 134217728    {product}
    uintx MaxHeapSize                   := 8407482368   {product}
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
{code}
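For context on those numbers (an editorial observation, not from the 
thread): MaxHeapSize = 8407482368 bytes ≈ 7.8 GiB, which matches the ~8 GB 
vmem figure cited elsewhere in this thread, and InitialHeapSize = 525375744 
bytes ≈ 0.5 GiB. Those are consistent with HotSpot's server-class defaults 
of roughly 1/4 and 1/64 of physical RAM respectively, i.e. a machine with 
about 32 GB.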







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-06 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15086133#comment-15086133 ]

John Vines commented on SPARK-12650:


In the test example I was using, I set driver and executor to 512MB. When I 
disabled vmem checking, they seemed to be running with the appropriate 
memory settings. Getting the actual command line executed is a bit of a 
PITA, but I can get it if it's actually needed.







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-06 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085747#comment-15085747 ]

John Vines commented on SPARK-12650:


No, neither Xmx nor Xms is set. This has to do with Java 7 and its default 
heap allocation (7 got very aggressive vs. 6, which was relatively sane) in 
my experience.







[jira] [Commented] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-06 Thread John Vines (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085722#comment-15085722 ]

John Vines commented on SPARK-12650:


I'm referring to the client which is launching the driver in yarn-cluster 
mode. Because the SparkSubmit JVM, operating as the client, never specifies 
Xmx, it takes ~8GB of vmem on my machine, which causes YARN to kill the 
whole container.

This is in Spark 1.5.2.







[jira] [Created] (SPARK-12650) No means to specify Xmx settings for SparkSubmit in yarn-cluster mode

2016-01-05 Thread John Vines (JIRA)
John Vines created SPARK-12650:
--

 Summary: No means to specify Xmx settings for SparkSubmit in 
yarn-cluster mode
 Key: SPARK-12650
 URL: https://issues.apache.org/jira/browse/SPARK-12650
 Project: Spark
  Issue Type: Bug
Affects Versions: 1.5.2
 Environment: Hadoop 2.6.0
Reporter: John Vines


Background-
I have an app master designed to do some work and then launch a Spark job.

Issue-
If I use yarn-cluster, SparkSubmit does not set -Xmx on itself at all, so 
the JVM takes its default heap, which is relatively large. That consumes a 
large amount of vmem, and the container is killed by YARN. This can be 
worked around by disabling YARN's vmem check, but that is a hack.

If I run in yarn-client mode, it's fine as long as my container has enough 
space for the driver, which is manageable. But the utter lack of an Xmx 
setting for what I believe is a very small JVM is a problem.

I believe this was introduced with the fix for SPARK-3884
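The vmem-check workaround mentioned above, for reference (this is the 
standard YARN NodeManager property; disabling the check is a blunt 
instrument, as noted):

{code}
<!-- yarn-site.xml: disable the NodeManager's virtual-memory check so the
     oversized default heap no longer gets the container killed. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
{code}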


