Here is the background of your issue: https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/#spill-to-disk
HashAgg introduced a spill-to-disk capability in 1.11.0 that allows Drill to run a query's HashAgg in a memory-constrained environment. The memory required for the operator is based on the cumulative memory required by the operator's minor fragments (I believe it is 32MB per fragment). The error you are seeing means this total exceeds the memory available to the operator.

With this you have two options:

1. Reduce the number of minor fragments so that the total fits within the available memory, *or* increase the memory per query per node (`planner.memory.max_query_memory_per_node`).
2. Set the fallback option to *TRUE* (default is *FALSE*) and let the operator run with unbounded memory (i.e. `planner.memory.max_query_memory_per_node` is not honoured).

My recommendation is to go with #1. Going with #2 risks instability, which IMHO is worse than a failing query.

On Sun, Mar 11, 2018 at 11:56 AM, Anup Tiwari <[email protected]> wrote:

> Hi All,
> I recently upgraded from 1.10.0 to 1.12.0 and in one of my queries I got
> the below error:
>
> INFO o.a.d.e.p.i.aggregate.HashAggregator - User Error Occurred: Not
> enough memory for internal partitioning and fallback mechanism for HashAgg
> to use unbounded memory is disabled. Either enable fallback config
> drill.exec.hashagg.fallback.enabled using Alter session/system command or
> increase memory limit for Drillbit
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Not
> enough memory for internal partitioning and fallback mechanism for HashAgg
> to use unbounded memory is disabled. Either enable fallback config
> drill.exec.hashagg.fallback.enabled using Alter session/system command or
> increase memory limit for Drillbit
>
> Can anybody tell me how the "drill.exec.hashagg.fallback.enabled"
> option works?
> Should we always set it to true, as it is false by default?
>
> Regards,
> Anup Tiwari
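For reference, the two options above can be applied with ALTER SESSION (or ALTER SYSTEM to make them cluster-wide). This is a sketch only: the values shown are illustrative assumptions, not recommendations, and should be tuned to your cluster's actual memory and core counts.

```sql
-- Option 1 (recommended): give each query more memory per node
-- and/or reduce parallelism so fewer minor fragments share it.
-- 4 GB and width 4 are example values, not tuned recommendations.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;
ALTER SESSION SET `planner.width.max_per_node` = 4;

-- Option 2 (risky): allow HashAgg to fall back to unbounded memory.
-- planner.memory.max_query_memory_per_node is then not honoured.
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = true;
```

Settings made with ALTER SESSION last only for the current connection, so option 2 can at least be scoped to a single troublesome query rather than enabled system-wide.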
