Hi,

I am in a weird situation where a Spark job behaves inconsistently: sometimes it runs very fast, and sometimes it takes a long time to complete. I am not sure what is going on. The cluster is free most of the time.

The image below shows that the shuffle read alone is taking more than 3 hours when writing data back into a Hive table with sqlContext.sql("INSERT OVERWRITE TABLE hivetable SELECT * FROM spark_temporary_table").

[image: Inline image 1]
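For reference, the call quoted above would look like this in PySpark. This is only a sketch: it assumes a Hive-enabled `SparkSession` named `spark` is already available, and the table names are taken from the message as-is.

```python
# Build the insert-overwrite statement quoted in the message above.
# Table names (hivetable, spark_temporary_table) come from the original post.
insert_sql = (
    "INSERT OVERWRITE TABLE hivetable "
    "SELECT * FROM spark_temporary_table"
)

# Running it requires a live Spark session with Hive support, e.g.:
# spark = SparkSession.builder.enableHiveSupport().getOrCreate()
# spark.sql(insert_sql)
```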

Could anyone let me know how to resolve this and make the job run faster?

Thanks,
Asmath
