Ankit Singhal created PHOENIX-3289:

             Summary: Region servers crashing randomly with "Native memory 
allocation (malloc) failed to allocate 12288 bytes for committing reserved 
memory."
                 Key: PHOENIX-3289
             Project: Phoenix
          Issue Type: Bug
            Reporter: Ankit Singhal

For the GROUP BY case, we try to keep the data in memory, but if we exceed the 
total memory given to the global cache (phoenix.query.maxGlobalMemorySize), we 
start spilling the data to disk. We spill the entries, map the spill file pages 
with MappedByteBuffer, and add a bloom filter so that keys can be looked up 
faster.
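
To illustrate the idea (a minimal sketch only, not the actual Phoenix GroupBy 
spill code; class names, keys and sizes below are made up):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class SpillSketch {
        static final int PAGE_SIZE = 4 * 1024; // current per-mmap page size

        public static void main(String[] args) throws IOException {
            Path spillFile = Files.createTempFile("groupby-spill", ".bin");
            try (FileChannel ch = FileChannel.open(spillFile,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                // Spill one "page" of serialized (key, aggregate) entries.
                MappedByteBuffer page =
                        ch.map(FileChannel.MapMode.READ_WRITE, 0, PAGE_SIZE);
                page.put("key1|count=42".getBytes());
                page.force(); // flush the page to disk
                // On a later lookup the page is mapped again; a per-page bloom
                // filter (not shown) is consulted first so pages that cannot
                // contain the key are never mapped or read.
            } finally {
                Files.deleteIfExists(spillFile);
            }
        }
    }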

MappedByteBuffer doesn't release its mapping immediately on close(); the GC has 
to collect such buffers once it finds them with no references. So even though 
we close the channel and the file properly and delete the file at the end, it's 
possible that GC hasn't run for a while and the process's mmap count keeps 
increasing.
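
A minimal sketch of the lifecycle issue (assuming Linux semantics; this is 
illustrative code, not Phoenix code):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MmapLifecycle {
        public static void main(String[] args) throws IOException {
            Path spill = Files.createTempFile("spill", ".bin");
            MappedByteBuffer buf;
            try (FileChannel ch = FileChannel.open(spill,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            }                        // channel closed here
            Files.delete(spill);     // file deleted here

            buf.put(0, (byte) 1);    // still works: the mapping is alive, so the
                                     // process mmap count has not gone down
            buf = null;              // only now is the buffer unreachable; the
                                     // mapping is released whenever GC collects it
        }
    }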

PFB, the parent ticket talking about the memory leak with the 

And, a related ticket 

But now the problem is that we are keeping a page size of only 4KB for each 
mmap, which seems very small. I don't know the rationale behind keeping it so 
low (maybe for performance, so a single key can always be fetched in one page), 
but it can result in many mmaps even for a small file. So we could think of 
keeping a 64KB page size or more as the default, but we need to see the impact 
on performance.
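
Rough arithmetic on why the page size matters (the spill file size below is 
hypothetical, just to show the scale relative to the default vm.max_map_count 
of 65530):

    public class MapCountEstimate {
        public static void main(String[] args) {
            long spillFileBytes = 256L * 1024 * 1024;   // e.g. a 256MB spill file
            for (int pageSize : new int[] {4 * 1024, 64 * 1024}) {
                long mappings = (spillFileBytes + pageSize - 1) / pageSize;
                System.out.printf("page size %6d bytes -> up to %,d mappings%n",
                        pageSize, mappings);
            }
            // 4KB  -> up to 65,536 mappings for that one spill file
            // 64KB -> up to 4,096 mappings
        }
    }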

As a workaround to mask the issue, increasing phoenix.query.maxGlobalMemorySize 
and increasing vm.max_map_count is a way to keep things going. 
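
For reference, the workaround could look roughly like this on each region 
server (the values are placeholders to be sized per cluster, not 
recommendations):

    <!-- hbase-site.xml: give the global memory manager more room so GROUP BY
         spills to disk less often -->
    <property>
      <name>phoenix.query.maxGlobalMemorySize</name>
      <value>SIZED_FOR_YOUR_RS_HEAP</value>
    </property>

    # OS side: raise the per-process mmap limit (Linux default is 65530)
    sysctl -w vm.max_map_count=262144    # example value only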

Thanks [~rmaruthiyodan] for narrowing down the problem by analyzing the JVM 
hotspot error (hs_err) file and detecting the increase in mmap count during 
GROUP BY. 
