Hi Vovo,
This question may not be relevant to the current version, but I would like
to understand how batch sizing works starting with 1.13.
Does it help resolve the out-of-memory issues we hit in 1.12?
a) Issue with HashAgg:
RESOURCE ERROR: Not enough memory for internal partitioning and fallback
mechanism
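For what it's worth, that 1.12 error usually means the Hash Aggregate operator could not reserve enough memory for its spill partitions. Two session options documented by Drill are worth checking (the value below is only an example; verify both option names against your version):

```sql
-- Raise the per-node memory budget available to a single query
-- (example value: 4 GB).
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;

-- Allow HashAgg to fall back to the pre-1.11 unbounded-memory behavior
-- instead of failing when it cannot partition within its memory limit.
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = true;
```

Enabling the fallback trades the memory limit for the old behavior, so it can push the node toward OOM under concurrency; raising the per-query budget is usually the safer first step.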
None of the three worked
apache drill (hive.default)> use hive.`default`;
+------+-------------------------------------------+
|  ok  |                  summary                  |
+------+-------------------------------------------+
| true | Default schema changed to [hive.default]  |
+------+-------------------------------------------+
Hello Khurram,
If Hive storage-based authorization is enabled for your deployment,
that may be the reason why you're getting such messages.
The test shows that when a user doesn't have rights at the storage
level to query the table files, Drill reports this confusing error.
Here is the test:
https://git
Still not working:
apache drill> use hive.`default`;
+------+-------------------------------------------+
|  ok  |                  summary                  |
+------+-------------------------------------------+
| true | Default schema changed to [hive.default]  |
+------+-------------------------------------------+
Yes please, I have a question: can we query HBase versions from Drill? If
yes, how?
Thanks
Sami
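On the HBase versions question: as far as I know, Drill's HBase storage plugin surfaces only the latest version of each cell, so older versions are not reachable through SQL. A plain Drill query over HBase looks like the sketch below (the table name `hbase.students` and the `account` column family are made-up examples):

```sql
-- Hypothetical table and column family; Drill returns only the newest
-- version of each cell, decoded from bytes with CONVERT_FROM.
SELECT CONVERT_FROM(row_key, 'UTF8')        AS student_id,
       CONVERT_FROM(t.account.name, 'UTF8') AS name
FROM   hbase.students t;
```

To read historical versions you would go to HBase directly, e.g. in the HBase shell: `get 'students', 'row1', {COLUMN => 'account:name', VERSIONS => 3}`.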
-Original Message-
From: Volodymyr Vysotskyi [mailto:volody...@apache.org]
Sent: Monday, May 27, 2019 3:23 AM
To: dev ; user
Subject: Apache Drill Hangout - May 28, 2019
Hi Drillers,
We will have our bi-weekly hangout tomorrow May 28th, at 10 AM PST
(link: https://meet.google.com/yki-iqdf-tai ).
If there are any topics you would like to discuss during the hangout please
respond to this email.
Kind regards,
Volodymyr Vysotskyi