[ 
https://issues.apache.org/jira/browse/ORC-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley reopened ORC-476:
-------------------------------

> Make SearchArgument kryo buffer size configurable
> -------------------------------------------------
>
>                 Key: ORC-476
>                 URL: https://issues.apache.org/jira/browse/ORC-476
>             Project: ORC
>          Issue Type: Improvement
>          Components: MapReduce
>            Reporter: Dhruve Ashar
>            Assignee: Dhruve Ashar
>            Priority: Major
>             Fix For: 1.5.5, 1.6.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Kryo output buffer size for the SearchArgument (SArg) is currently 
> hardcoded to 100000. The Hive implementation uses an initial size of 4096 
> and a maximum size of 10485760.
>  
> Starting with version 2.3, Spark uses the Apache ORC implementation instead 
> of the Hive one. Spark jobs are failing with buffer overflow errors because 
> the buffer here is too small. We should make this configurable so that 
> frameworks using the ORC implementation can pass in the size settings.
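
A minimal sketch of what "making it configurable" could look like: reading initial and
maximum buffer sizes from a configuration object, falling back to Hive-style defaults
(4096 initial, 10485760 max) when unset. The property key names here are hypothetical
illustrations, not ORC's actual configuration keys, and a plain java.util.Properties
stands in for the framework's real configuration class:

```java
import java.util.Properties;

public class SargBufferConfig {
    // Hypothetical key names for illustration; ORC's real keys may differ.
    static final String INITIAL_KEY = "orc.sarg.kryo.buffer.initial";
    static final String MAX_KEY = "orc.sarg.kryo.buffer.max";

    // Defaults matching the Hive implementation mentioned in the issue.
    static final int DEFAULT_INITIAL = 4096;
    static final int DEFAULT_MAX = 10485760;

    /** Returns {initialSize, maxSize}, using defaults when keys are absent. */
    static int[] bufferSizes(Properties conf) {
        int initial = Integer.parseInt(
                conf.getProperty(INITIAL_KEY, String.valueOf(DEFAULT_INITIAL)));
        int max = Integer.parseInt(
                conf.getProperty(MAX_KEY, String.valueOf(DEFAULT_MAX)));
        return new int[]{initial, max};
    }

    public static void main(String[] args) {
        // With no configuration set, the Hive-style defaults apply.
        Properties conf = new Properties();
        int[] sizes = bufferSizes(conf);
        System.out.println(sizes[0] + " " + sizes[1]);

        // A framework such as Spark could raise the max to avoid overflow.
        conf.setProperty(MAX_KEY, "20971520");
        System.out.println(bufferSizes(conf)[1]);
    }
}
```

The resulting sizes would then be passed to the Kryo Output used to serialize the
SearchArgument, instead of the hardcoded 100000-byte buffer.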



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
