Github user manishgupta88 commented on the issue:

    https://github.com/apache/carbondata/pull/2595
  
    @xuchuanyin ....Usually in production scenarios the driver memory will be less
than the executor memory. Now we are using unsafe memory for caching the
block/blocklet dataMap in the driver. Currently the unsafe memory configured for
the executor is also getting used for the driver, which is not a good idea.
    Therefore it is required to separate out the driver and executor unsafe memory.
    You can observe the same in the Spark configuration as well: Spark provides
different parameters for configuring the driver and executor memory overhead to
control unsafe memory usage, namely
    spark.yarn.driver.memoryOverhead and spark.yarn.executor.memoryOverhead
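
    For reference, a sketch of how Spark's split looks in practice (the overhead
values, class name, and jar here are illustrative only, not taken from this PR):

    ```shell
    # Driver and executor off-heap/overhead memory are tuned independently;
    # the values and application details below are example placeholders.
    spark-submit \
      --master yarn \
      --conf spark.yarn.driver.memoryOverhead=1024 \
      --conf spark.yarn.executor.memoryOverhead=2048 \
      --class org.example.MyApp \
      my-app.jar
    ```

    A similar pair of CarbonData properties would let users size the driver-side
dataMap cache separately from executor-side unsafe usage.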
