[ 
https://issues.apache.org/jira/browse/SPARK-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xukun updated SPARK-9973:
-------------------------
    Description: 
When caching a table in memory in Spark SQL, we allocate too much memory.

InMemoryColumnarTableScan.class
  val initialBufferSize = columnType.defaultSize * batchSize
  ColumnBuilder(attribute.dataType, initialBufferSize, attribute.name, 
useCompression)

BasicColumnBuilder.class
  buffer = ByteBuffer.allocate(4 + size * columnType.defaultSize)

Since the `size` passed into BasicColumnBuilder is already columnType.defaultSize * batchSize, the total allocated size is 4 + batchSize * columnType.defaultSize * columnType.defaultSize. We change it to 4 + batchSize * columnType.defaultSize.

  was:
When caching a table in memory in Spark SQL, we allocate the wrong buffer size (4 + size * columnType.defaultSize * columnType.defaultSize); the right buffer size is 4 + size * columnType.defaultSize.


> wrong buffer size
> -----------------
>
>                 Key: SPARK-9973
>                 URL: https://issues.apache.org/jira/browse/SPARK-9973
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: xukun
>
> When caching a table in memory in Spark SQL, we allocate too much memory.
> InMemoryColumnarTableScan.class
>   val initialBufferSize = columnType.defaultSize * batchSize
>   ColumnBuilder(attribute.dataType, initialBufferSize, attribute.name, 
> useCompression)
> BasicColumnBuilder.class
>   buffer = ByteBuffer.allocate(4 + size * columnType.defaultSize)
> Since the `size` passed into BasicColumnBuilder is already columnType.defaultSize * batchSize, the total allocated size is 4 + batchSize * columnType.defaultSize * columnType.defaultSize. We change it to 4 + batchSize * columnType.defaultSize.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
