Sure folks, will try later today!
Best Regards
Ankit Khettry
On Sat, 7 Sep, 2019, 6:56 PM Sunil Kalra, wrote:
> Ankit
>
> Can you try reducing the number of cores or increasing memory? With the
> configuration below, each of your cores is getting ~3.5 GB. Otherwise your data
> is s
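A rough sketch of the arithmetic behind that suggestion; the actual executor sizing is in the truncated quote above, so the numbers below are only illustrative:

import org.apache.spark.sql.SparkSession

// Illustrative only: with 14 GB executors and 4 cores, each concurrently
// running task gets roughly 14 / 4 = 3.5 GB. Halving the cores (or doubling
// the executor memory) roughly doubles the memory available to each task.
val spark = SparkSession.builder()
  .appName("executor-sizing-sketch")
  .config("spark.executor.memory", "14g")
  .config("spark.executor.cores", "2")            // fewer tasks share the same heap
  .config("spark.executor.memoryOverhead", "2g")  // off-heap overhead, a rough guess
  .getOrCreate()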
Thanks Chris
I'm going to try it soon, perhaps by setting spark.sql.shuffle.partitions to 2001.
Also, I was wondering whether it would help to repartition the data by the
fields I am using in the group-by and window operations, as in the sketch below.
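Roughly what I have in mind (the column names are placeholders for the real group-by / window keys, and `df` stands for the input DataFrame):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// `spark` is the active SparkSession and `df` the input DataFrame (not shown).
// Anything above 2000 shuffle partitions makes Spark track map output with
// HighlyCompressedMapStatus, which is why 2001 is the usual value to try.
spark.conf.set("spark.sql.shuffle.partitions", "2001")

// Repartition by the same keys used in the group-by and the window, so the
// downstream shuffles operate on already co-located data.
val repartitioned = df.repartition(col("user_id"), col("event_date"))

val grouped = repartitioned
  .groupBy("user_id", "event_date")
  .agg(sum("amount").as("total_amount"))

val w = Window.partitionBy("user_id").orderBy("event_date")
val result = grouped.withColumn("running_total", sum("total_amount").over(w))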
Best Regards
Ankit Khettry
On Sat, 7 Sep, 2019, 1:05 PM Chris Teoh, wrote:
>
Nope, it's a batch job.
Best Regards
Ankit Khettry
On Sat, 7 Sep, 2019, 6:52 AM Upasana Sharma, <028upasana...@gmail.com>
wrote:
> Is it a streaming job?
>
> On Sat, Sep 7, 2019, 5:04 AM Ankit Khettry
> wrote:
>
>> I have a Spark job that consists of a large nu
them are even marked resolved.
Can someone guide me on how to approach this problem? I am using
Databricks Spark 2.4.1.
Best Regards
Ankit Khettry
master node resources.
Try running the job in YARN mode and, if the issue persists, try increasing
the disk volumes.
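Something along these lines; the settings are only placeholders:

import org.apache.spark.sql.SparkSession

// Sketch only: on a real cluster this is usually done with
// `spark-submit --master yarn`, with HADOOP_CONF_DIR pointing at the
// cluster's configuration directory.
val spark = SparkSession.builder()
  .appName("yarn-mode-sketch")
  .master("yarn")
  .getOrCreate()

// In YARN mode, shuffle and spill files land in the NodeManager local
// directories (yarn.nodemanager.local-dirs), so "increasing the disk
// volumes" means adding space to those directories on the worker nodes.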
Best Regards
Ankit Khettry
On Wed, 17 Apr, 2019, 9:44 AM Balakumar iyer S,
wrote:
> Hi ,
>
>
> While running the following spark code in the cluster with following
>