GitHub user vanzin commented on the pull request:
https://github.com/apache/spark/pull/560#issuecomment-46592132
Hi @tgravescs,
When I tested, I used a non-`spark.` property and it worked. But I just want
to make sure we're on the same page about what you think SPARK_JAVA_OPTS should
do. Here's some code I have which is called by both the driver and the
executors:
    val silly1 = System.getProperty("silly.property")
    val silly2 = System.getProperty("spark.silly.property")
    logInfo(s"silly.property = $silly1, spark.silly.property = $silly2")
    if (silly1 == null && silly2 == null) {
      throw new IllegalArgumentException(
        "Neither silly.property nor spark.silly.property is set!")
    }
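For context, here's roughly how that check is wired up so it runs on both
sides (a minimal sketch of my test job, using println instead of logInfo to
keep it self-contained; SillyDepSleeper is my own test code, not part of
Spark):

    import org.apache.spark.{SparkConf, SparkContext}

    object SillyDepSleeper {
      def checkProps(): Unit = {
        val silly1 = System.getProperty("silly.property")
        val silly2 = System.getProperty("spark.silly.property")
        println(s"silly.property = $silly1, spark.silly.property = $silly2")
        if (silly1 == null && silly2 == null) {
          throw new IllegalArgumentException(
            "Neither silly.property nor spark.silly.property is set!")
        }
      }

      def main(args: Array[String]): Unit = {
        checkProps()  // runs in the driver JVM
        val sc = new SparkContext(new SparkConf().setAppName("SillyDepSleeper"))
        // foreach ships the closure to the executors, so checkProps runs there too
        sc.parallelize(1 to 10, 2).foreach(_ => checkProps())
        sc.stop()
      }
    }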
I then run this (against the master branch) in yarn-cluster mode:

    SPARK_JAVA_OPTS='-Dspark.silly.property=bar -Dsilly.property=foo' ./bin/spark-submit ...
The driver launches, but the executors die with:

    14/06/19 10:31:19 INFO SillyDepSleeper: silly.property = null, spark.silly.property = null
    14/06/19 10:31:19 ERROR Executor: Exception in task ID 10
    java.lang.IllegalArgumentException: Neither silly.property nor spark.silly.property is set!
        at com.cloudera.ss.SillyDepSleeper$.checkProps(SillyDepSleeper.scala:32)
So I'm not sure SPARK_JAVA_OPTS is really working on master at all. Another
thing I tried was using just `SPARK_JAVA_OPTS='-Dsilly.property=foo'`, and in
that case the driver doesn't start either, meaning non-`spark.` properties are
not propagated.
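For what it's worth, the per-process settings I'd expect people to migrate to
would look something like this (a sketch I haven't verified on this branch; it
assumes spark-submit's --driver-java-options and --properties-file flags and
the spark.executor.extraJavaOptions config):

    # Driver JVM options have to come from the launcher, since in
    # yarn-cluster mode the driver JVM is up before user code runs:
    ./bin/spark-submit \
      --master yarn-cluster \
      --driver-java-options '-Dsilly.property=foo' \
      --properties-file silly.conf \
      ...

    # silly.conf: executor JVM options, applied when executors launch
    spark.executor.extraJavaOptions  -Dsilly.property=foo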
I can look at fixing the problem you mention, but it looks like
SPARK_JAVA_OPTS has some questionable behaviour even without my changes.