[ https://issues.apache.org/jira/browse/ORC-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16397138#comment-16397138 ]
Owen O'Malley commented on ORC-310:
-----------------------------------

Sorry, I'm just getting back to this. So the problem is that the Hadoop codec doesn't properly reset() and needs to be removed from the pool? Is it the DirectDecompressor? Snappy? Zlib?

> better error handling and lifecycle management for codecs
> ---------------------------------------------------------
>
>                 Key: ORC-310
>                 URL: https://issues.apache.org/jira/browse/ORC-310
>             Project: ORC
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>            Priority: Major
>
> When there's a failure potentially involving the codec, the codec object may
> be left in bad state and should not be reused (esp. given that Hadoop codecs
> are brittle w.r.t. how they maintain state).
> The codecs can be closed on error instead.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
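The close-on-error pattern the issue proposes could be sketched roughly as follows. This is a hypothetical illustration, not the actual ORC or Hadoop codec API: the `Codec` interface, pool, and method names here are all made up for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the pattern described in the issue: after a failure, the
// codec's internal state is suspect, so it is closed and discarded
// rather than returned to the shared pool for reuse.
public class CodecLifecycleSketch {
  // Illustrative stand-in for a compression codec; not the real API.
  public interface Codec {
    byte[] decompress(byte[] input) throws Exception;
    void close();
  }

  private final Deque<Codec> pool = new ArrayDeque<>();

  public int pooled() {
    return pool.size();
  }

  public byte[] decompressAndRecycle(Codec codec, byte[] input) throws Exception {
    try {
      byte[] out = codec.decompress(input);
      pool.push(codec);   // success: codec state is clean, safe to reuse
      return out;
    } catch (Exception e) {
      codec.close();      // failure: do not return to the pool; close and drop
      throw e;
    }
  }
}
```

On the happy path the codec goes back into the pool; on any exception it is closed and never reused, which avoids handing a possibly-corrupted codec to the next reader.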