For posterity, I found the root cause and filed a JIRA:
https://issues.apache.org/jira/browse/SPARK-21960. I plan to open a pull
request with the minor fix.
From: Karthik Palaniappan
Sent: Friday, September 1, 2017 9:49 AM
To: Akhil Das
Cc: user@spark.apache.org;
Any ideas @Tathagata? I'd be happy to contribute a patch if you can point me in
the right direction.
From: Karthik Palaniappan
Sent: Friday, August 25, 2017 9:15 AM
To: Akhil Das
Cc: user@spark.apache.org; t...@databricks.com
Subject: RE:
You have to set spark.executor.instances=0 in a streaming application with
dynamic allocation:
https://github.com/tdas/spark/blob/master/streaming/src/main/scala/org/apache/spark/streaming/scheduler/ExecutorAllocationManager.scala#L207.
I originally had it set to a positive value, and the requirement check at that
line failed.
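In spirit, the check behind that link behaves like the following (an illustrative sketch, not the exact Spark source; the object and method names here are invented for the example):

```scala
// Illustrative sketch only: streaming dynamic allocation refuses to start
// unless spark.executor.instances is 0, so that a fixed executor count and
// the streaming allocation manager don't fight over the number of executors.
object StreamingAllocationCheck {
  def validate(executorInstances: Int, streamingDynAllocEnabled: Boolean): Unit = {
    if (streamingDynAllocEnabled) {
      // Mirrors the requirement: a non-zero instance count is rejected up front.
      require(executorInstances == 0,
        "spark.executor.instances must be 0 when " +
        "spark.streaming.dynamicAllocation.enabled=true")
    }
  }
}
```

So a positive spark.executor.instances combined with spark.streaming.dynamicAllocation.enabled=true fails this requirement immediately at startup.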
Have you tried setting spark.executor.instances to a positive non-zero value?
Also, since it's a streaming application, set the executor cores > 1.
On Wed, Aug 23, 2017 at 3:38 AM, Karthik Palaniappan wrote:
> I ran the HdfsWordCount example using this command:
I ran the HdfsWordCount example using this command:
spark-submit run-example \
  --conf spark.streaming.dynamicAllocation.enabled=true \
  --conf spark.executor.instances=0 \
  --conf spark.dynamicAllocation.enabled=false \
  --conf spark.master=yarn \
  --conf spark.submit.deployMode=client \