Hi Anup


Can you share details about the memory allocations (JVM, etc.) you have for
Drill, in addition to the cluster size? Also, please provide the platform
details (e.g. Hadoop version) and the profiles for the succeeded and failed
queries.



i.e. the JSON of these queries (e.g.
http://drillbit:8047/profiles/<queryProfileId>.json)
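
For convenience, the profile JSON can be pulled with curl from the Drillbit
web UI. A minimal sketch; the hostname and the default 8047 HTTP port are
assumptions, and `<queryProfileId>` is a placeholder for the real profile ID:

```shell
# Build the profile URL; substitute your Drillbit host and the real query ID.
DRILLBIT_HOST="drillbit"
QUERY_ID="<queryProfileId>"
PROFILE_URL="http://${DRILLBIT_HOST}:8047/profiles/${QUERY_ID}.json"
echo "${PROFILE_URL}"

# Uncomment to download the profile once the placeholders are filled in:
# curl -s -o profile.json "${PROFILE_URL}"
```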


Thanks

Kunal


On Mon, Mar 12, 2018 at 9:34 AM, Sorabh Hamirwasia <shamirwa...@mapr.com>
wrote:

> With the session option `drill.exec.hashagg.fallback.enabled` set to TRUE,
> HashAgg will not honor the operator memory limit it was assigned (and thus
> will not spill to disk), and may end up consuming unbounded memory.
>
>
> Thanks,
> Sorabh
>
> ________________________________
> From: Anup Tiwari <anup.tiw...@games24x7.com>
> Sent: Monday, March 12, 2018 6:45:12 AM
> To: user@drill.apache.org
> Subject: Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for
> internal partitioning and fallback mechanism for HashAgg to use unbounded
> memory is disabled
>
> Hi Kunal,
> I have executed the command below and the query completed in 38.763 sec.
> alter session set `drill.exec.hashagg.fallback.enabled`=TRUE;
> Can you tell me what the problem is with setting this variable? You have
> mentioned it will risk instability.
>
>
>
>
>
> On Mon, Mar 12, 2018 6:27 PM, Anup Tiwari anup.tiw...@games24x7.com
> wrote:
> Hi Kunal,
> I am still getting this error for another query, even though I have
> increased the planner.memory.max_query_memory_per_node variable from 2 GB
> to 10 GB at the session level.
> Can you tell me how this was handled in earlier Drill versions (<1.11.0)?
>
>
>
>
>
>
> On Mon, Mar 12, 2018 1:59 PM, Anup Tiwari anup.tiw...@games24x7.com
> wrote:
> Hi Kunal,
> Thanks for the info. I went with option 1 and increased
> planner.memory.max_query_memory_per_node, and now the queries are working
> fine. Will let you know in case of any issues.
>
>
>
>
>
> On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua ku...@apache.org  wrote:
> Here is the background of your issue:
>
> https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/#spill-to-disk
>
>
>
>
> HashAgg introduced a spill-to-disk capability in 1.11.0 that allows Drill
> to run a query's HashAgg in a memory-constrained environment. The memory
> required for the operator is based on the cumulative memory required by
> the operator's minor fragments (I believe it is 32 MB per fragment).
>
> The message you get is because this total exceeds the calculated memory
> limit. With this you have two options:
>
>   1. Reduce the number of minor fragments such that the total is within
>   the available memory, *or* increase the memory per query per node
>   (planner.memory.max_query_memory_per_node).
>   2. Set the fallback as *TRUE* (default is *FALSE*) and let the operator
>   run with unconstrained memory
>   (i.e. `planner.memory.max_query_memory_per_node` is not honoured).
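
The arithmetic behind option 1 can be sketched roughly as follows. The ~32 MB
per-minor-fragment figure is the one quoted above, not an official constant,
and real memory planning involves more inputs than this:

```shell
# Rough check: does the cumulative HashAgg requirement fit the per-node limit?
per_fragment_mb=32          # per-minor-fragment figure quoted in this thread
minor_fragments=16          # example value; take the real count from the profile
limit_mb=$((2 * 1024))      # planner.memory.max_query_memory_per_node = 2 GB

required_mb=$((minor_fragments * per_fragment_mb))
echo "required: ${required_mb} MB, limit: ${limit_mb} MB"

if [ "${required_mb}" -le "${limit_mb}" ]; then
    echo "fits: HashAgg can spill to disk within the limit"
else
    echo "exceeds limit: reduce fragments or raise the limit (option 1)"
fi
```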
>
>
>
>
> My recommendation is to go with #1. Going with #2 will risk instability,
> which is worse than a query failing, IMHO.
>
> On Sun, Mar 11, 2018 at 11:56 AM, Anup Tiwari <anup.tiw...@games24x7.com>
> wrote:
> > Hi All,
> > I recently upgraded from 1.10.0 to 1.12.0 and in one of my queries I got
> > the error below:
> > INFO o.a.d.e.p.i.aggregate.HashAggregator - User Error Occurred: Not
> > enough memory for internal partitioning and fallback mechanism for
> > HashAgg to use unbounded memory is disabled. Either enable fallback
> > config drill.exec.hashagg.fallback.enabled using Alter session/system
> > command or increase memory limit for Drillbit
> > org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: Not
> > enough memory for internal partitioning and fallback mechanism for
> > HashAgg to use unbounded memory is disabled. Either enable fallback
> > config drill.exec.hashagg.fallback.enabled using Alter session/system
> > command or increase memory limit for Drillbit
> >
> > Can anybody tell me the working of the
> > "drill.exec.hashagg.fallback.enabled" variable?
> > Should we always set it to true, as it is false by default?
> > Regards,
> > Anup Tiwari
>
> Regards,
> Anup Tiwari
>
