Hi Kunal,
I have executed the below command, and the query completed in 38.763 sec.
alter session set `drill.exec.hashagg.fallback.enabled`=TRUE;
Can you tell me what the problems are in setting this variable, since you have
mentioned it will risk instability?
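For reference, I set it only at session scope so it can be reverted after the
query; a sketch of the commands I used (assuming `ALTER SESSION RESET` and the
`sys.options` table behave here as in the Drill docs):

-- Enable the HashAgg fallback for this session only, not system-wide
alter session set `drill.exec.hashagg.fallback.enabled` = TRUE;
-- Check the effective value before running the query
select * from sys.options where name = 'drill.exec.hashagg.fallback.enabled';
-- Revert to the default (FALSE) once the query has finished
alter session reset `drill.exec.hashagg.fallback.enabled`;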
On Mon, Mar 12, 2018 6:27 PM, Anup Tiwari [email protected] wrote:
Hi Kunal,
I am still getting this error for some other query. I have increased the
planner.memory.max_query_memory_per_node variable from 2 GB to 10 GB at the
session level, but I am still getting this issue.
Can you tell me how this was handled in earlier Drill versions (<1.11.0)?
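For reference, the session-level change was the following (a sketch; the option
value is in bytes, so 10 GB = 10737418240):

alter session set `planner.memory.max_query_memory_per_node` = 10737418240;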
On Mon, Mar 12, 2018 1:59 PM, Anup Tiwari [email protected] wrote:
Hi Kunal,
Thanks for the info. I went with option 1 and increased
planner.memory.max_query_memory_per_node, and now the queries are working fine.
Will let you know in case of any issues.
On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua [email protected] wrote:
Here is the background of your issue:
https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/#spill-to-disk
Drill 1.11.0 introduced a spill-to-disk capability for HashAgg, which allows
Drill to run a query's HashAgg in a memory-constrained environment. The
memory required for the operator is based on the cumulative memory required
by its minor fragments (I believe it is 32 MB per fragment).
The message you get appears because this total exceeds the calculated memory limit.
With this, you have two options:
1. Reduce the number of minor fragments such that the total stays within
the available memory, *or* increase the memory per query per node
(planner.memory.max_query_memory_per_node).
2. Set the fallback to *TRUE* (default is *FALSE*) and let the operator
run with unconstrained memory
(i.e. `planner.memory.max_query_memory_per_node` is not honoured).
My recommendation is to go with #1. Going with #2 will risk instability,
which IMHO is worse than a query failing.
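For concreteness, here is a sketch of both options as session-level commands.
The values are illustrative assumptions; `planner.width.max_per_node` is one
knob for reducing per-node parallelism (and hence the number of minor
fragments), and the memory option takes its value in bytes:

-- Option 1a: reduce per-node parallelism so fewer minor fragments are created
alter session set `planner.width.max_per_node` = 4;
-- Option 1b: or raise the per-query memory budget (4 GB shown, in bytes)
alter session set `planner.memory.max_query_memory_per_node` = 4294967296;
-- Option 2 (not recommended): let HashAgg run with unbounded memory
alter session set `drill.exec.hashagg.fallback.enabled` = TRUE;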
On Sun, Mar 11, 2018 at 11:56 AM, Anup Tiwari <[email protected]> wrote:
Hi All,
I recently upgraded from 1.10.0 to 1.12.0, and in one of my queries I got the
below error:
INFO o.a.d.e.p.i.aggregate.HashAggregator - User Error Occurred: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled. Either enable fallback config drill.exec.hashagg.fallback.enabled using Alter session/system command or increase memory limit for Drillbit
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled. Either enable fallback config drill.exec.hashagg.fallback.enabled using Alter session/system command or increase memory limit for Drillbit
Can anybody tell me how the "drill.exec.hashagg.fallback.enabled" variable
works? Should we always set it to true, given that it is false by default?
Regards,
Anup Tiwari