[ https://issues.apache.org/jira/browse/ORC-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395864#comment-16395864 ]

ASF GitHub Bot commented on ORC-310:
------------------------------------

Github user prasanthj commented on a diff in the pull request:

    https://github.com/apache/orc/pull/222#discussion_r173941203
  
    --- Diff: java/core/src/java/org/apache/orc/impl/RecordReaderUtils.java ---
    @@ -306,15 +330,24 @@ public void releaseBuffer(ByteBuffer buffer) {
     
         @Override
         public DataReader clone() {
    +      if (this.file != null) {
    +        throw new UnsupportedOperationException(
    +            "Cannot clone a DataReader that is already opened");
    +      }
           try {
    -        return (DataReader) super.clone();
    +        DefaultDataReader clone = (DefaultDataReader) super.clone();
    +        // Make sure we don't share the same codec between two readers.
    +        clone.codec = OrcCodecPool.getCodec(clone.compressionKind);
    --- End diff --
    
    can this be moved inside super.clone()?
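
    For context on why the reassignment is needed at all: Object.clone()
    makes a shallow copy, so without taking a fresh codec from the pool the
    original reader and its clone would share one codec instance. A minimal
    demonstration of that hazard, using hypothetical stand-in classes
    (Holder, ShallowCloneDemo) rather than ORC's actual reader hierarchy:

        class Holder implements Cloneable {
            final StringBuilder state = new StringBuilder(); // stands in for the codec field

            @Override
            public Holder clone() throws CloneNotSupportedException {
                return (Holder) super.clone(); // shallow copy: 'state' is shared
            }
        }

        public class ShallowCloneDemo {
            public static void main(String[] args) throws Exception {
                Holder a = new Holder();
                Holder b = a.clone();
                a.state.append("x");
                System.out.println(b.state); // prints "x": both objects see one StringBuilder
            }
        }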


> better error handling and lifecycle management for codecs
> ---------------------------------------------------------
>
>                 Key: ORC-310
>                 URL: https://issues.apache.org/jira/browse/ORC-310
>             Project: ORC
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>            Priority: Major
>
> When a failure potentially involves the codec, the codec object may be left
> in a bad state and should not be reused (especially since Hadoop codecs are
> brittle with respect to how they maintain state).
> Such codecs can be closed on error instead of being returned to the pool.
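
A minimal sketch of the close-on-error lifecycle described above. The classes
here (Codec, CodecPool, Reader) are simplified stand-ins, not ORC's actual
OrcCodecPool API:

    import java.util.ArrayDeque;
    import java.util.Deque;

    class Codec implements AutoCloseable {
        byte[] decompress(byte[] in) { return in; } // placeholder for real work
        @Override public void close() { /* release resources for good */ }
    }

    /** Hypothetical pool of reusable codec instances. */
    class CodecPool {
        private final Deque<Codec> idle = new ArrayDeque<>();
        synchronized Codec take() {
            Codec c = idle.poll();
            return c != null ? c : new Codec();
        }
        synchronized void give(Codec c) { idle.push(c); }
    }

    class Reader {
        private final CodecPool pool;
        Reader(CodecPool pool) { this.pool = pool; }

        byte[] read(byte[] compressed) throws Exception {
            Codec codec = pool.take();
            try {
                byte[] out = codec.decompress(compressed);
                pool.give(codec); // success: codec state is known-good, return for reuse
                return out;
            } catch (Exception e) {
                codec.close();    // failure: codec may be in a bad state, never repool it
                throw e;
            }
        }
    }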



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
