If you've already set the limit at 4GB, there might be something else going
on. We'll take a look at this in more detail, but you shouldn't expect a
solution soon (probably in 1.14.0, since 1.13.0 is already on its way out for
release).
For now, bumping up the limit as you've done, reducing the
Hi Kunal,
First of all, thanks for such a good explanation; it really helped me
understand a few things. But you have mentioned that in case of failure the
"Drillbits capped at around 1.2GB", and suggested to "increase the
memory-per-query-per-node from the current 2GB to a higher level".
Are you
Hi Kunal,
Please find the link below :-
https://drive.google.com/open?id=13NVDqSgDD-Pe6H0smAkvzqktgXURgZF4
The SQL file contains the platform details, and the log files contain the
success/failure logs of the query.
On Wed, Mar 14, 2018 7:51 PM, Kunal Khatua ku...@apache.org wrote:
Hi Anup
Can you share
Hi Anup
Can you share this as a file? There seems to be some truncation of the
contents.
Share it using some online service like Google Drive or Dropbox, since the
mailing list might not allow for attachments.
Thanks
~ Kunal
On Tue, Mar 13, 2018 at 11:44 PM, Anup Tiwari
JSON Profile when Succeeded :-
{"id":{"part1":2690693429455769721,"part2":6509382378722762087},"type":1,"start":1521007764471,"end":1521007906770,"query":"create
table a_games_log_visit_utm as\nselect\ndistinct\nglv.sessionid,\ncase when
(UFG('utms=', glv.url, '&') <> 'null') then UFG('utms=',
Hi Kunal,
Please find the cluster/platform details below :-
Number of Nodes : 5
RAM/Node : 32GB
Core/Node : 8
DRILL_MAX_DIRECT_MEMORY="20G"
DRILL_HEAP="8G"
DRILL VERSION = 1.12.0
HADOOP VERSION = 2.7.3
ZOOKEEPER VERSION = 3.4.8 (Installed in Distributed Mode on 3
Hi Anup
Can you share details about the memory allocations (JVM, etc.) you have for
Drill, in addition to the cluster size? Also provide the platform details
(e.g. Hadoop version) and the profiles for the succeeded and failed
queries, i.e. the JSON of these queries (e.g.
Setting the session option `drill.exec.hashagg.fallback.enabled`=TRUE
means HashAgg will not honor the operator memory limit it was assigned
(thus not spilling to disk) and will end up consuming unbounded memory.
Thanks,
Sorabh
From: Anup Tiwari
Hi Kunal,
I have executed the command below and the query completed in 38.763 sec.
alter session set `drill.exec.hashagg.fallback.enabled`=TRUE;
Can you tell me what the problem is with setting this variable, since you have
mentioned it will risk instability?
On Mon, Mar 12, 2018 6:27 PM, Anup
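As a minimal sketch of working with the option discussed above (assuming the
standard sys.options system table and ALTER SESSION RESET support in this Drill
version), the fallback can be inspected, enabled for the current session only,
and then reverted once the problem query has finished:

-- Check the current value of the HashAgg fallback option
SELECT * FROM sys.options WHERE name LIKE '%hashagg.fallback%';

-- Enable it for this session only (risks unbounded memory use, as noted above)
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = TRUE;

-- Revert to the default after the query completes
ALTER SESSION RESET `drill.exec.hashagg.fallback.enabled`;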
Hi Kunal,
I am still getting this error for some other query. I have increased the
planner.memory.max_query_memory_per_node variable from 2 GB to 10 GB at the
session level but am still hitting this issue.
Can you tell me how this was handled in earlier Drill versions (<1.11.0)?
On Mon,
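For reference, a sketch of the session-level change described above; this
assumes the option value is given in bytes (the 2 GB default being 2147483648),
so 10 GB corresponds to 10737418240:

-- Raise the per-query, per-node memory limit for this session to 10 GB
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 10737418240;

-- Confirm the new session value
SELECT * FROM sys.options WHERE name LIKE '%max_query_memory_per_node%';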
Hi Kunal,
Thanks for the info. I went with option 1 and increased
planner.memory.max_query_memory_per_node, and now the queries are working fine.
Will let you know in case of any issues.
On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua ku...@apache.org wrote:
Here is the background of your issue:
https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/#spill-to-disk
HashAgg introduced a spill-to-disk capability in 1.11.0 that allows
Drill to run a query's HashAgg in a memory-constrained environment. The
memory required
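A sketch of inspecting the spill-related configuration the linked page
describes, assuming the spill directories are boot options set in
drill-override.conf and are visible through the sys.boot and sys.options
system tables:

-- Boot-time spill settings (e.g. spill directories) from drill-override.conf
SELECT * FROM sys.boot WHERE name LIKE '%spill%';

-- Session/system options that govern HashAgg memory behavior
SELECT * FROM sys.options WHERE name LIKE '%hashagg%';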
Hi All,
I recently upgraded from 1.10.0 to 1.12.0, and in one of my queries I got the
below error :-
INFO o.a.d.e.p.i.aggregate.HashAggregator - User Error Occurred: Not enough
memory for internal partitioning and fallback mechanism for HashAgg to use
unbounded memory is disabled. Either enable fallback