Is there any improvement if you give the executors more memory?
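For example, something along these lines when launching the shell (a sketch; the 8g figure is only a placeholder to tune against your data and cluster):

  ./bin/spark-shell --executor-memory 8g

--executor-memory is the command-line form of spark.executor.memory.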

-----Original Message-----
From: va...@percona.com [mailto:va...@percona.com] On Behalf Of Vadim Tkachenko
Sent: Wednesday, December 30, 2015 9:51 AM
To: Cheng, Hao
Cc: user@spark.apache.org
Subject: Re: Problem with WINDOW functions?

Hi,

I am getting the same error with write.parquet("/path/to/file"):

WARN HeartbeatReceiver: Removing executor 0 with no recent heartbeats: 160714 ms exceeds timeout 120000 ms
15/12/30 01:49:05 ERROR TaskSchedulerImpl: Lost executor 0 on 10.10.7.167: Executor heartbeat timed out after 160714 ms
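(The 120000 ms in that message is the default of spark.network.timeout, so presumably it could be raised, e.g.

  --conf spark.network.timeout=600s

but that would only hide whatever is making the executors stall.)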


On Tue, Dec 29, 2015 at 5:35 PM, Cheng, Hao <hao.ch...@intel.com> wrote:
> Can you try to write the result into a file instead? collect() pulls the entire
> result set onto the driver, while write.parquet() leaves the work distributed,
> so let's see if there is any issue on the executor side.
>
> sqlContext.sql("SELECT day,page,dense_rank() OVER (PARTITION BY day 
> ORDER BY pageviews DESC) as rank FROM d1").filter("rank <=
> 20").sort($"day",$"rank").write.parquet("/path/to/file")
>
> -----Original Message-----
> From: vadimtk [mailto:apache...@gmail.com]
> Sent: Wednesday, December 30, 2015 9:29 AM
> To: user@spark.apache.org
> Subject: Problem with WINDOW functions?
>
> Hi,
>
> I can't successfully execute a query with a WINDOW function.
>
> The statements are as follows:
>
> val orcFile = sqlContext.read.parquet("/data/flash/spark/dat14sn").filter("upper(project)='EN'")
> orcFile.registerTempTable("d1")
> sqlContext.sql("SELECT day,page,dense_rank() OVER (PARTITION BY day ORDER BY pageviews DESC) as rank FROM d1")
>   .filter("rank <= 20").sort($"day",$"rank").collect().foreach(println)
>
> With the default spark.driver.memory I am getting java.lang.OutOfMemoryError: Java heap space.
> The same happens if I set spark.driver.memory=10g.
>
> When I set spark.driver.memory=45g (the box has 256GB of RAM) the execution fails with a different error:
>
> 15/12/29 23:03:19 WARN HeartbeatReceiver: Removing executor 0 with no recent heartbeats: 129324 ms exceeds timeout 120000 ms
>
> And I see that GC takes a lot of time.
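> (Presumably the driver's collector could be switched as an experiment, e.g. launching
> with --driver-java-options -XX:+UseG1GC, but I suspect that only treats the symptom,
> since collect() has to hold the whole result in the driver heap anyway.)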
>
> What is the proper way to execute the statements above?
>
> I see similar problems reported here:
> http://stackoverflow.com/questions/32196859/org-apache-spark-shuffle-fetchfailedexception
> http://stackoverflow.com/questions/32544478/spark-memory-settings-for-count-action-in-a-big-table
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Problem-with-WINDOW-functions-tp25833.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
