>> [*...from backend and processes and writes it to a location*]
>>
>> takes around 2 hours to complete.
>>
>> What I understood is: as the default value of
>> spark.dynamicAllocation.schedulerBacklogTimeout is 1 sec, executors will
>> scale
>
> *From:* Attila Zsolt Piros
> *Sent:* Friday, April 9, 2021 11:11 AM
> *To:* Ranju Jain
> *Cc:* user@spark.apache.org
> *Subject:* Re: Dynamic Allocation Backlog Property in Spark on Kubernetes
>
> You should not set "spark.dynamicAllocation.schedulerBacklogTimeout" so
> high; the purpose of this config is very different.
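[Editor's note, not part of the original thread: the backlog timeout is usually left near its 1 s default and tuned together with the other dynamic allocation properties. A hypothetical spark-submit invocation on Kubernetes might look like the following; the executor counts, timeouts, API server address, and application file are made-up examples, and `shuffleTracking.enabled` is needed because Kubernetes has no external shuffle service.]

```shell
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
  my_app.py
```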
>
> Regards,
> Ranju
>
> *From:* Attila Zsolt Piros
> *Sent:* Friday, April 9, 2021 12:13 AM
> *To:* Ranju Jain
> *Cc:* user@spark.apache.org
> *Subject:* Re: Dynamic Allocation Backlog Property in Spark on Kubernetes
>
> Hi!
>
> For dynamic allocation you do not need to run the Spark jobs in parallel.
> Dynamic allocation simply means Spark scales up by requesting more
> executors when there are pending tasks (which is kind of related to the
> available partitions) and scales down when an executor is idle (as within
> one job the
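[Editor's note, not part of the original thread: the scale-up/scale-down rule described above can be sketched as a toy model in plain Python. This is not Spark's actual allocator; the function name, thresholds, and one-executor-per-pending-task policy are simplifications made up for illustration.]

```python
# Toy model of dynamic allocation: request executors when tasks have been
# pending longer than schedulerBacklogTimeout, release them when an
# executor has been idle longer than executorIdleTimeout.
def executors_wanted(pending_tasks, backlog_secs, idle_secs, current,
                     backlog_timeout=1, idle_timeout=60,
                     min_execs=0, max_execs=10):
    """Return the executor count this simplified allocator would target."""
    if pending_tasks > 0 and backlog_secs >= backlog_timeout:
        # Scale up: one executor per pending task, capped at the maximum.
        return min(max_execs, current + pending_tasks)
    if pending_tasks == 0 and idle_secs >= idle_timeout:
        # Scale down: release one idle executor, but keep the minimum.
        return max(min_execs, current - 1)
    return current

# Tasks backlogged past the 1 s default timeout -> request more executors.
print(executors_wanted(pending_tasks=4, backlog_secs=2, idle_secs=0, current=2))    # 6
# No pending work and an executor idle past the timeout -> scale down.
print(executors_wanted(pending_tasks=0, backlog_secs=0, idle_secs=120, current=3))  # 2
```

The point of the sketch: scaling is driven by pending tasks per job, not by whether several jobs run in parallel.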
Hi All,

I have set dynamic allocation enabled while running Spark on Kubernetes, but
new executors are requested only if pending tasks are backlogged for more
than the duration configured in the property
"spark.dynamicAllocation.schedulerBacklogTimeout".

My Use Case is:

There are a number of parallel jobs