1. You are getting this error because the Drillbit is running out of direct
memory. It is thrown by Netty when it cannot allocate a new chunk of direct
memory from the system. For each query, the allocator enforces that query's
limit, but I'm not sure we actually compute those limits so that, taken
together, they stay under the total direct memory limit (see the rough
sketch after this list).
2. When we hit a ChannelClosedException, all fragments that were
transmitting on that channel will most likely fail, even the ones that did
not run out of memory. It's hard to tell where the memory went without more
information about the queries you were running.
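
For illustration only (this is not Drill's actual allocator code; the class
and numbers are hypothetical), here is a sketch of what I mean by computing
per-query limits so that they stay within the total direct memory:

    // Hypothetical sketch: split the Drillbit's direct memory budget across
    // the queries admitted concurrently, so the per-query limits cannot add
    // up to more than the total (e.g. -XX:MaxDirectMemorySize).
    public class QueryMemoryBudget {
        private final long totalDirectMemory;   // total direct memory in bytes
        private final int maxConcurrentQueries; // admission-control limit

        public QueryMemoryBudget(long totalDirectMemory, int maxConcurrentQueries) {
            this.totalDirectMemory = totalDirectMemory;
            this.maxConcurrentQueries = maxConcurrentQueries;
        }

        // Per-query limit such that all admitted queries together stay under
        // the total, keeping a 10% slice for non-query overhead (RPC buffers, etc.).
        public long perQueryLimit() {
            long usable = (long) (totalDirectMemory * 0.9);
            return usable / maxConcurrentQueries;
        }

        public static void main(String[] args) {
            QueryMemoryBudget budget = new QueryMemoryBudget(8L << 30, 10); // 8 GB, 10 queries
            System.out.printf("per-query limit: %d bytes%n", budget.perQueryLimit());
        }
    }

With 8 GB of direct memory and 10 concurrent queries, that works out to a bit
over 700 MB per query; if limits are instead computed without accounting for
the actual concurrency, their sum can exceed the total and Netty's allocation
will fail exactly as in your logs.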

On Fri, May 13, 2016 at 3:45 PM, rahul challapalli <
[email protected]> wrote:

> Drillers,
>
> I was executing 20 queries using 10 concurrent clients on an 8-node
> cluster. The first 10 queries succeeded and the remaining 10 failed with
> "ChannelClosedException". The logs suggest that all the fragments running
> on one node hit a "java.lang.OutOfMemoryError: Direct buffer memory". Two
> questions here:
>    1. Can someone explain why we are even seeing this error? Shouldn't the
> allocator detect this condition upfront?
>    2. Why did all the fragments fail? Where did the memory go?
>
> - Rahul
>



-- 

Abdelhakim Deneche

Software Engineer
