shfshihuafeng commented on code in PR #2875:
URL: https://github.com/apache/drill/pull/2875#discussion_r1464211148


##########
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/common/HashPartition.java:
##########
@@ -157,11 +162,11 @@ public HashPartition(FragmentContext context, BufferAllocator allocator, Chained
         .build(logger);
     } catch (SchemaChangeException sce) {
      throw new IllegalStateException("Unexpected Schema Change while creating a hash table",sce);
-    }
-    this.hjHelper = semiJoin ? null : new HashJoinHelper(context, allocator);
-    tmpBatchesList = new ArrayList<>();
-    if (numPartitions > 1) {
-      allocateNewCurrentBatchAndHV();
+    } catch (OutOfMemoryException oom) {
+      close();

Review Comment:
   ### 1. Fix idea
   The design is that when any operator fails, the entire operator stack is closed. But `partitions` is an array whose slots are initialized to null; if a `HashPartition` object is not created successfully, its constructor throws, so every slot of `partitions` from the failing index onward stays null.
   
   ```
   for (int part = 0; part < numPartitions; part++) {
     partitions[part] = new HashPartition(context, allocator, baseHashTable,
         buildBatch, probeBatch, semiJoin, RECORDS_PER_BATCH, spillSet, part,
         spilledState.getCycle(), numPartitions);
   }
   ```
   
   For example, suppose the `partitions` array length is 32 (numPartitions = 32) and the constructor throws while building the partition at index 10. The assignment for index 10 never completes, so partitions[10] through partitions[31] are all null, even though the failed HashPartition had already allocated memory before throwing.
   
   When close() is then called, that HashPartition cannot be closed, because its slot in the array is null, so the memory it allocated is leaked.
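   A minimal sketch of the cleanup this forces on the caller (illustrative only; the loop and names are assumptions, not the actual Drill cleanup code):
   
   ```
   // Hypothetical null-safe cleanup: only partitions that were actually
   // assigned into the array can be closed; slots left null by the failed
   // constructor must be skipped, and the memory held by the partially
   // constructed partition is never reachable from here.
   for (HashPartition partition : partitions) {
     if (partition != null) {
       partition.close();
     }
   }
   ```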
   
   
   ### 2. Another fix idea
   
   Do not throw the exception from the constructor and do not call close() there; just catch it. The HashPartition object is then still created and assigned into `partitions`, so when close() is called later, its memory can be released. For example:
   
   ```
   // Hypothetical signature change: a one-element flag array lets the
   // constructor report a failure back to the caller (a plain boolean
   // parameter could not, since Java passes primitives by value).
   HashPartition(FragmentContext context, BufferAllocator allocator, ChainedHashTable baseHashTable,
                 RecordBatch buildBatch, RecordBatch probeBatch, boolean semiJoin,
                 int recordsPerBatch, SpillSet spillSet, int partNum,
                 int cycleNum, int numPartitions, boolean[] isException)
   
     } catch (OutOfMemoryException oom) {
       // do not call close() and do not rethrow; just record the failure
       isException[0] = true;
     }
   
   // AbstractHashBinaryRecordBatch#initializeBuild
   boolean[] isException = new boolean[] { false };
   Exception[] exceptions = new Exception[1];
   try {
     for (int part = 0; part < numPartitions; part++) {
       if (isException[0]) {
         break;
       }
       partitions[part] = new HashPartition(context, allocator, baseHashTable,
           buildBatch, probeBatch, semiJoin, RECORDS_PER_BATCH, spillSet, part,
           spilledState.getCycle(), numPartitions, isException);
     }
   } catch (Exception e) {
     exceptions[0] = e;
     isException[0] = true;
   }
   if (isException[0]) {
     throw UserException.memoryError(exceptions[0])
         .message("Failed to allocate hash partition.")
         .build(logger);
   }
   ```
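   With this second approach the partially constructed HashPartition is still assigned into the `partitions` array, so the operator's close() can walk the array and release whatever memory that partition allocated before the OutOfMemoryException, instead of leaking it through a null slot. The cost is a wider constructor signature and an error flag that every caller must check.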
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@drill.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
