Please show the write() call and the resulting files in HDFS. What files do
you see?
On Fri, Aug 11, 2017 at 1:10 PM, KhajaAsmath Mohammed <
mdkhajaasm...@gmail.com> wrote:
> tempTable = union_df.registerTempTable("tempRaw")
>
> create = hc.sql('CREATE TABLE IF NOT EXISTS blab.pyspark_dpprq
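For context, a minimal sketch of the kind of write() call being asked about, and of answering "what files do you see". The table name comes from the quoted code; the `insertInto` call and `append` mode are assumptions (the poster's actual write is what the reply is requesting), and the helper below for parsing `hdfs dfs -ls` output is hypothetical:

```python
# Sketch, not the poster's actual code. After the CREATE TABLE above, a write
# into the Hive table would typically look like (assumed mode/method):
#
#   union_df.write.mode("append").insertInto("blab.pyspark_dpprq")
#
# A successful write usually leaves part files plus a _SUCCESS marker in the
# table directory. To list them:  hdfs dfs -ls /path/to/warehouse/table
# A small (hypothetical) helper to pull just the paths out of that output:
def hdfs_ls_names(ls_output):
    """Extract the path column (last field) from `hdfs dfs -ls` lines."""
    names = []
    for line in ls_output.splitlines():
        parts = line.split()
        # data lines have 8 fields; this skips the 'Found N items' header
        if len(parts) == 8:
            names.append(parts[-1])
    return names
```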
We have had issues with gathering status on long-running jobs. We have
attempted to draw parallels between the Spark UI/Monitoring API and our
code base. Due to the separation between the code and the execution plan,
even guessing where we are in the process is difficult. The
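One way to bridge that gap, sketched below under assumptions: tag code regions with `sc.setJobGroup(...)` so jobs in the UI carry a recognizable description, then poll the Monitoring REST API's `/api/v1/applications/<app-id>/jobs` endpoint and summarize task counts. The endpoint and field names follow the documented API; the helper functions, the UI URL, and the group names are hypothetical.

```python
import json
import urllib.request

def summarize_jobs(jobs):
    """Summarize parsed JSON from GET <ui>/api/v1/applications/<app-id>/jobs.

    Each job entry reports its id, optional description, and task counts.
    """
    return [
        (j["jobId"], j.get("description"), j["numCompletedTasks"], j["numTasks"])
        for j in jobs
    ]

def fetch_jobs(ui_url, app_id):
    # e.g. ui_url = "http://driver-host:4040" (assumed reachable)
    with urllib.request.urlopen(f"{ui_url}/api/v1/applications/{app_id}/jobs") as r:
        return json.load(r)

# In the job itself, label the work so UI entries map back to code, e.g.:
#   sc.setJobGroup("stage-3-join", "join raw with reference data")
```

With the job groups set, each `description` in the summary points back to a named region of the code, which is the mapping the UI alone does not give you.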
Did you also increase the heap size of the Java application that starts
Spark?
https://alvinalexander.com/blog/post/java/java-xmx-xms-memory-heap-size-control
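Note that `-Xmx` on the launching JVM only covers code that runs before Spark starts; the driver and executor heaps are controlled by Spark's own settings, which must be fixed at submit time because a JVM's max heap cannot be raised after launch. A sketch of the usual spark-submit flags (the memory values and application name are placeholders):

```python
# Sketch: driver heap (-Xmx of the driver JVM) must be set before that JVM
# launches, so pass it via spark-submit rather than in application code.
driver_mem = "4g"    # placeholder value
executor_mem = "4g"  # placeholder value

submit_cmd = [
    "spark-submit",
    "--driver-memory", driver_mem,                       # driver JVM heap
    "--conf", f"spark.executor.memory={executor_mem}",   # executor JVM heap
    "my_job.py",                                         # hypothetical app
]
print(" ".join(submit_cmd))
```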
On Thu, Sep 7, 2017 at 12:16 PM, Imran Rajjad wrote:
> I am getting Out of Memory error while running