Hi
When you deploy Spark workers inside containers, the amount of memory you
need to give each container depends on three things:

   1. *Spark daemon memory*: Memory you give to the Spark daemon process.
   Usually 1 GB is enough. This is passed via the SPARK_DAEMON_MEMORY
   environment variable.
   2. *Spark worker memory*: The actual memory you give to the worker
   itself. This depends on your needs and is passed via the
   SPARK_WORKER_MEMORY environment variable.
   3. *Free memory for the OS*: Memory you leave for OS-related tasks. In
   my experience, 2 to 4 GB is a good value.

The total amount of memory you should assign to your container is then the
sum of those values. In your case: 1 GB for the daemon + 8 GB for the
worker + 2 GB (or 4 GB) for the OS = 11 GB.
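For example, if you run the worker with Docker, a minimal sketch could look
like the following (the image name is just a placeholder, adjust it to your
own setup):

  # 1 GB daemon + 8 GB worker + 2 GB OS headroom = 11 GB container limit
  docker run \
    -e SPARK_DAEMON_MEMORY=1g \
    -e SPARK_WORKER_MEMORY=8g \
    --memory=11g \
    your-spark-worker-image

The --memory flag caps the container at the 11 GB total from above, while
the two environment variables split that budget between the daemon and the
worker, leaving the remainder for the OS.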

This is for Spark 2.1.1.

On Thu, 1 Nov 2018 at 4:52, zhankun tang (<tangzhan...@gmail.com>)
wrote:

> Hi Hesong,
> "8.0 GB of 8 GB physical memory used;"
> Seems like a memory shortage?
>
> Zhankun
>
> On Wed, 31 Oct 2018 at 00:03, 徐河松 <xuhes...@koolearn-inc.com> wrote:
>
>> Hi, Friends
>>
>>
>>
>> When I run Hive on Spark, I get these errors:
>>
>> ExecutorLostFailure (executor 8 exited caused by one of the running
>> tasks) Reason: Container marked as failed:
>> container_1534244004648_46447_01_000012 on host: zgc-e14-71.54-hadoop.cn.
>> Exit status: 143. Diagnostics: Container
>> [pid=168012,containerID=container_1534244004648_46447_01_000012] is running
>> beyond physical memory limits. Current usage: 8.0 GB of 8 GB physical
>> memory used; 9.8 GB of 32 GB virtual memory used. Killing container.
>>
>>
>>
>>  Any help would be appreciated.
>>
>
