Re: sqlContext.cacheTable + yarn client mode

2016-03-30 Thread Jeff Zhang
The table data is cached in the block managers on the executors. Could you
paste the driver log showing the OOM?
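
If it helps, one way to see where the cached blocks actually live is from the
spark shell (a rough sketch against the Spark 1.6 API; "tableName" is a
placeholder for your table):

    // cacheTable is lazy, so run a query to force materialization.
    sqlContext.cacheTable("tableName")
    sqlContext.sql("SELECT COUNT(*) FROM tableName").show()

    // Print in-memory block usage per block manager (the driver's block
    // manager is listed alongside the executors').
    sc.getExecutorStorageStatus.foreach { s =>
      println(s"${s.blockManagerId.executorId} @ ${s.blockManagerId.host}: " +
        s"${s.memUsed} bytes of cached blocks")
    }

The cached partitions should show up under the executor entries rather than
the driver.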

On Thu, Mar 31, 2016 at 1:24 PM, Soam Acharya  wrote:

> Hi folks,
>
> I understand that invoking sqlContext.cacheTable("tableName") will load
> the table into a compressed in-memory columnar format. When Spark is
> launched via the spark shell in YARN client mode, is the table loaded into
> the local Spark driver process in addition to the executors in the Hadoop
> cluster, or only into the executors? We're investigating an OOM issue on
> the local Spark driver for some SQL code and were wondering if the local
> cache load could be the culprit.
>
> Appreciate any thoughts. BTW, we're running Spark 1.6.0 on this particular
> cluster.
>
> Regards,
>
> Soam
>



-- 
Best Regards

Jeff Zhang


sqlContext.cacheTable + yarn client mode

2016-03-30 Thread Soam Acharya
Hi folks,

I understand that invoking sqlContext.cacheTable("tableName") will load the
table into a compressed in-memory columnar format. When Spark is launched
via the spark shell in YARN client mode, is the table loaded into the local
Spark driver process in addition to the executors in the Hadoop cluster, or
only into the executors? We're investigating an OOM issue on the local Spark
driver for some SQL code and were wondering if the local cache load could be
the culprit.
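
For concreteness, the pattern in question looks roughly like this (a
simplified, made-up example; our actual data sources and queries differ):

    // Register a table, cache it in the compressed in-memory columnar
    // format, then query it.
    val df = sqlContext.read.parquet("hdfs:///path/to/data")
    df.registerTempTable("tableName")
    sqlContext.cacheTable("tableName")
    sqlContext.sql("SELECT COUNT(*) FROM tableName").show()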

Appreciate any thoughts. BTW, we're running Spark 1.6.0 on this particular
cluster.

Regards,

Soam