Github user Ben-Zvi commented on a diff in the pull request:

    https://github.com/apache/drill/pull/761#discussion_r103068553
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/xsort/managed/ExternalSortBatch.java ---
    @@ -934,6 +1005,14 @@ private void updateMemoryEstimates(long memoryDelta, RecordBatchSizer sizer) {
         long origInputBatchSize = estimatedInputBatchSize;
         estimatedInputBatchSize = Math.max(estimatedInputBatchSize, actualBatchSize);
     
    +    // The row width may end up as zero if all fields are nulls or some
    +    // other unusual situation. In this case, assume a width of 10 just
    +    // to avoid lots of special case code.
    +
    +    if (estimatedRowWidth == 0) {
    +      estimatedRowWidth = 10;
    --- End diff ---
    
    Where is estimatedRowWidth being set? Could there be an extreme situation
    (e.g., too many columns) where the actual row width is much more than 10,
    and thus all the following computations are off?


