Any thoughts on how to troubleshoot this? (I apparently have some fat JSON data
going into the buffers.) It's not huge data, just wide/complex (total size is
1.4 GB). Any thoughts on how to troubleshoot, or settings I can use to work
through these errors?
Thanks!
John
Error: SYSTEM ERROR:
Hello Drillers,
There are some great proposed talks related to Drill for this year's Hadoop
Summit. Please help promote Drill in the wider Big Data community by taking a
look through the list and voting for the talks that sound good.
You don't need to register or anything to vote; it just asks
John,
Sorry about that; this already works as expected.
Give it a try; it's that simple to deploy:
SELECT first_name FROM cp.`employee.json` WHERE contains(first_name,'\w+')
LIMIT 5;
first_name |
-----------|
Sheri      |
Derrick    |
Michael    |
Maya       |
Roberta    |
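The `\w+` pattern in the query above matches one or more word characters, so every non-empty alphanumeric name satisfies it, which is why all five rows come back. A minimal plain-Java sketch of that regex behavior (illustrative only, not Drill's `contains` implementation):

```java
import java.util.regex.Pattern;

public class RegexContainsDemo {
    public static void main(String[] args) {
        // '\w+' matches one or more word characters [A-Za-z0-9_],
        // so a regex "find" succeeds against any ordinary name.
        Pattern wordChars = Pattern.compile("\\w+");
        String[] names = {"Sheri", "Derrick", "Michael", "Maya", "Roberta"};
        for (String name : names) {
            System.out.println(name + " -> " + wordChars.matcher(name).find());
        }
    }
}
```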
2016-02-04 20:41
Jacques, there is a very similar JIRA here:
https://issues.apache.org/jira/browse/DRILL-3922. This issue still vexes me.
John
On Wed, Dec 30, 2015 at 2:38 PM, Jacques Nadeau wrote:
> We don't currently have a way to do something equivalent to SELECT KVGEN(*)
> FROM
You see this exception because one of the columns in your dataset is larger
than an individual DrillBuf can store. The hard limit
is Integer.MAX_VALUE bytes. When we try to expand one of the buffers, we
notice the allocation request is oversized and fail the query. It would be
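That ceiling comes from Java itself: buffer lengths are addressed with an `int`, so a single buffer can never exceed `Integer.MAX_VALUE` (2^31 - 1 = 2,147,483,647 bytes, just under 2 GiB). A minimal sketch of that kind of oversize guard (a hypothetical illustration, not Drill's actual allocator code):

```java
public class BufferLimitDemo {
    // Java addresses buffer lengths with an int, so a single buffer
    // is capped at Integer.MAX_VALUE (2^31 - 1) bytes.
    static final long MAX_SINGLE_BUFFER_BYTES = Integer.MAX_VALUE;

    // Hypothetical guard: reject any single allocation over the cap.
    static void checkAllocation(long requestedBytes) {
        if (requestedBytes > MAX_SINGLE_BUFFER_BYTES) {
            throw new IllegalStateException(
                "Allocation request of " + requestedBytes
                + " bytes exceeds the single-buffer limit of "
                + MAX_SINGLE_BUFFER_BYTES + " bytes");
        }
    }

    public static void main(String[] args) {
        checkAllocation(1_400L * 1024 * 1024);       // ~1.4 GB fits
        System.out.println("1.4 GB allocation ok");
        try {
            checkAllocation(3_000L * 1024 * 1024);   // ~3 GB is oversized
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note that 1.4 GB of total data can still trip the limit if a single column's buffer needs to grow past the cap while being expanded.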
Excuse my basic questions: when you say "we", are you referring to the Drill
developers? And what is Integer.MAX_VALUE bytes? Is that a query-time setting?
A drillbit setting? Is it editable? How does that value get interpreted for
complex data types (objects and arrays)?
Not only would the column be helpful,