Hi Aakash,
Glad to know that repartition helped!
The overall number of tasks actually depends on the kind of operations you are
performing and also on how the DataFrame is partitioned.
I can't comment on the former but can provide some pointers on the latter.
The default value of spark.sql.shuffle.partitions is 200.
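A minimal sketch of tuning it down for a small dataset (assuming an existing SparkSession named spark; the value 8 is just an example, not from this thread):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("shuffle-tuning-demo").getOrCreate()

    # Every shuffle (join, groupBy, distinct, ...) produces this many partitions,
    # and therefore this many tasks, in the following stage. For a ~60 MB dataset,
    # the default of 200 mostly yields tiny, near-empty tasks.
    spark.conf.set("spark.sql.shuffle.partitions", "8")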
Hi Srinath,
Thanks for such an elaborate reply. How can I reduce the overall number of
tasks?
I found that simply repartitioning the CSV file into 8 parts and
converting it to Parquet with Snappy compression helped not only in even
distribution of the tasks on all the nodes, but also helped in bringi
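For reference, that conversion step could look roughly like this (a sketch only; the paths and app name are placeholders, not from this thread):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

    # Read the CSV, split it into 8 partitions so all workers get a share,
    # then write it back as Snappy-compressed Parquet (splittable and columnar).
    df = spark.read.csv("/path/to/input.csv", header=True, inferSchema=True)
    df.repartition(8).write.option("compression", "snappy").parquet("/path/to/output_parquet")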
Hi Aakash,
Can you check the logs for Executor ID 0? It was restarted on worker
192.168.49.39, perhaps due to OOM or something similar.
I also observed that the number of tasks is high and unevenly distributed
across the workers.
Check if there are too many partitions in the RDD and tune them using
spark.sql.shuffle.partitions.
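A quick way to check the partition count (a sketch, assuming a DataFrame named df):

    # Number of partitions = number of tasks a stage over this DataFrame will run.
    print(df.rdd.getNumPartitions())

    # If the count is far above the total cores in the cluster, shrink it.
    # coalesce() avoids a full shuffle; repartition() rebalances evenly.
    df = df.coalesce(8)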
Yes, but when I increased my executor memory, the Spark job halts
after running a few steps, even though the executor isn't dying.
Data - 60,000 data-points, 230 columns (60 MB data).
Any input on why it behaves like that?
On Tue, Jun 12, 2018 at 8:15 AM, Vamshi Talla wrote:
Aakash,
Like Jorn suggested, did you increase your test data set? If so, did you also
update your executor-memory setting? It seems like you might be exceeding the
executor memory threshold.
Thanks
Vamshi Talla
On Jun 11, 2018, at 8:54 AM, Aakash Basu wrote:
Hi Jorn/Others,
Thanks for your help. Now the data is being distributed in a proper way, but
the challenge is that after a certain point I'm getting this error, after
which everything stops moving ahead -
2018-06-11 18:14:56 ERROR TaskSchedulerImpl:70 - Lost executor 0 on
192.168.49.39: Remote RPC client disassociated. Likely due to containers
exceeding thresholds, or network issues.
If the data is in the kB range then Spark will always schedule it to one node.
As soon as it gets bigger you will see usage of more nodes.
Hence, increase your testing dataset.
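One way to test that (a sketch; the row count, column names, and grouping key are made up) is to generate a larger synthetic DataFrame and watch how the tasks spread in the Spark UI:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, rand

    spark = SparkSession.builder.appName("scale-test").getOrCreate()

    # ~10 million rows instead of 60,000, so the data no longer fits
    # comfortably in a couple of kB-scale partitions on one node.
    big = spark.range(0, 10_000_000).withColumn("payload", rand())
    big.groupBy((col("id") % 100).alias("bucket")).count().show()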
> On 11. Jun 2018, at 12:22, Aakash Basu wrote:
>
> Jorn - The code is a series of feature engineering and model tuning
> operations
Try:
--num-executors 3 --executor-cores 4 --executor-memory 2G --conf
spark.scheduler.mode=FAIR
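For reference, the same settings could also be expressed when building the session (a sketch with a made-up app name; note that executor counts and memory generally must be fixed before the application starts, so in practice they belong on spark-submit as above, and spark.executor.instances is the conf equivalent of --num-executors on YARN):

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("tuning-demo")
             .config("spark.executor.instances", "3")   # equivalent of --num-executors
             .config("spark.executor.cores", "4")
             .config("spark.executor.memory", "2g")
             .config("spark.scheduler.mode", "FAIR")
             .getOrCreate())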
On Mon, Jun 11, 2018 at 2:43 PM, Aakash Basu wrote:
> Hi,
>
> I have submitted a job on a *4 node cluster*, where I see most of the
> operations happening at one of the worker nodes while the other two are sitting idle.
What is your code? Maybe it does an operation which is bound to a single
host, or your data volume is too small for multiple hosts.
> On 11. Jun 2018, at 11:13, Aakash Basu wrote:
>
> Hi,
>
> I have submitted a job on a 4 node cluster, where I see most of the operations
> happening at one of the worker nodes while the other two are sitting idle.