We appear to be running out of memory when attempting a large join with an ORDER BY clause:

    [Client-1] INFO org.apache.drill.jdbc.impl.DrillResultSetImpl$ResultsListener - Query failed:
    org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR:
    org.apache.drill.common.exceptions.DrillRuntimeException: Adding this batch
    causes the total size to exceed max allowed size. Current runningBytes
    1073380040, Incoming batchBytes 630917. maxBytes 1073741824
    Fragment 1:5
The docs list the maximum size for hash tables as:

    exec.max_hash_table_size  1073741824  Ending size for hash tables. Range: 0 - 1073741824.

Does this mean that the largest allowable size has been exceeded, and that the only option is to disable hash joins via the planner.enable_hashjoin option?
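
For reference, this is the kind of change I had in mind (standard Drill ALTER SESSION syntax; whether disabling hash joins is actually the right fix here is exactly what I'm asking):

```sql
-- Check the current hash-related option values
SELECT name, val FROM sys.options WHERE name LIKE '%hash%';

-- Disable hash joins for this session only, forcing the planner
-- to fall back to merge joins instead
ALTER SESSION SET `planner.enable_hashjoin` = false;
```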

If so, is there a reason that the maximum allowable size is 1073741824 bytes (1 GiB)?

 Thanks
