[ 
https://issues.apache.org/jira/browse/PHOENIX-3289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15562946#comment-15562946
 ] 

James Taylor commented on PHOENIX-3289:
---------------------------------------

bq. But now the problem is that we are keeping a page size of only 4KB for 
each mmap
Agreed - we fixed a similar issue for ORDER BY by increasing the page size. I 
think it's just an oversight that the value is configured so low. Let's 
definitely increase it.
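For illustration, here's a minimal sketch of why a small page size inflates 
the map count (this is not the actual spill code; the class, method names, 
and sizes are hypothetical): each page gets its own MappedByteBuffer, so a 
spill file costs roughly fileSize / pageSize entries against the OS mmap 
limit.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class SpillPageMapping {
    // Hypothetical sizes for illustration only.
    static final int SMALL_PAGE = 4 * 1024;   // the current 4KB default
    static final int LARGER_PAGE = 64 * 1024; // a proposed 64KB default

    // Maps the whole file one page at a time; every map() call creates a
    // separate mapping that counts against the OS vm.max_map_count limit.
    static List<MappedByteBuffer> mapInPages(FileChannel ch, int pageSize)
            throws IOException {
        List<MappedByteBuffer> pages = new ArrayList<>();
        long size = ch.size();
        for (long pos = 0; pos < size; pos += pageSize) {
            long len = Math.min(pageSize, size - pos);
            pages.add(ch.map(FileChannel.MapMode.READ_WRITE, pos, len));
        }
        return pages;
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("spill.bin", "rw")) {
            raf.setLength(1 << 20); // a 1MB spill file
            FileChannel ch = raf.getChannel();
            // 1MB / 4KB = 256 mappings vs. 1MB / 64KB = 16 mappings.
            System.out.println("4KB pages:  " + mapInPages(ch, SMALL_PAGE).size());
            System.out.println("64KB pages: " + mapInPages(ch, LARGER_PAGE).size());
        }
    }
}
{code}

With a 1MB spill file, 4KB pages produce 256 mappings while 64KB pages 
produce 16, so bumping the default should shrink the map count by an order 
of magnitude.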

Also, there has been some work by [~maryannxue] and one of the GSoC students to 
try replacing memory-mapped file usage with just regular files (the thinking 
being that the OS already does a good job of caching these anyway). See 
PHOENIX-2405. Unfortunately, that work never made it far enough to verify the 
hypothesis and land in the code base.
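A rough sketch of that alternative (my own illustration, not the actual 
PHOENIX-2405 patch): serve pages with positional reads on a regular 
FileChannel, so no address-space mappings accumulate and hot pages are still 
served from the OS page cache.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RegularFilePageReader implements AutoCloseable {
    private final FileChannel channel;
    private final int pageSize;

    RegularFilePageReader(Path file, int pageSize) throws IOException {
        this.channel = FileChannel.open(file, StandardOpenOption.READ);
        this.pageSize = pageSize;
    }

    // Serves one page with a positional read; repeated access is handled by
    // the OS page cache instead of a per-page mmap region.
    ByteBuffer readPage(long pageIndex) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(pageSize);
        long pos = pageIndex * (long) pageSize;
        while (buf.hasRemaining()) {
            if (channel.read(buf, pos + buf.position()) < 0) {
                break; // hit EOF before the page was full
            }
        }
        buf.flip();
        return buf;
    }

    @Override
    public void close() throws IOException {
        channel.close(); // releases the descriptor immediately, no GC involved
    }
}
{code}

Here close() frees everything deterministically, which is exactly the 
property the mmap version lacks.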

> Region servers crashing randomly with "Native memory allocation (malloc) 
> failed to allocate 12288 bytes for committing reserved memory"
> ---------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-3289
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3289
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>
> In the GROUP BY case, we try to keep the data in memory, but if we exceed 
> the total memory given to the global cache (phoenix.query.maxGlobalMemorySize), 
> we start spilling the data to disk. We spill pages, map them with 
> MappedByteBuffer, and add a bloom filter so that they can be accessed faster.
> MappedByteBuffer doesn't release its memory immediately on close(); GC needs 
> to collect such buffers once they have no references. So even though we close 
> the channel and file properly and delete the file at the end, it's possible 
> that GC has not run for a while and the map count keeps growing (see the 
> sketch below this description).
> See the parent ticket below, which describes the memory leak with the 
> FileChannel.map() function:
> http://bugs.java.com/view_bug.do?bug_id=4724038
> and a related ticket:
> http://bugs.java.com/view_bug.do?bug_id=6558368
> But now the problem is that we are keeping a page size of only 4KB for each 
> mmap, which seems very small. I don't know the rationale behind keeping it 
> so low - maybe to optimize single-key lookups - but it can result in many 
> mmaps even for a small file. So we could consider a default page size of 
> 64KB or more, but we need to measure the impact on performance.
> As a workaround to mask the issue, increasing 
> phoenix.query.maxGlobalMemorySize and raising vm.max_map_count is enough to 
> get going.
> Thanks [~rmaruthiyodan] for narrowing down the problem by analyzing the JVM 
> HotSpot error file and spotting the increase in mmap count during GROUP BY 
> queries.
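To make the lifecycle issue above concrete, here is a minimal standalone 
sketch (Linux behavior; the file name and sizes are arbitrary) showing that 
closing the channel and deleting the file does not unmap the buffer - the 
mapping stays in /proc/<pid>/maps, and counts against vm.max_map_count, until 
GC collects the MappedByteBuffer.

{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappingOutlivesClose {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("spill", ".bin");
        MappedByteBuffer buf;
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        } // the channel is closed here ...
        Files.delete(file); // ... and the file is deleted here

        // ... yet the mapping is still live: the buffer stays readable and
        // writable, and the region remains in /proc/<pid>/maps until GC
        // collects the MappedByteBuffer.
        buf.put(0, (byte) 42);
        System.out.println("still usable after close+delete: " + buf.get(0));
    }
}
{code}

That is also why raising vm.max_map_count (e.g. with 
sysctl -w vm.max_map_count=<larger value>) only masks the problem: the 
mappings still accumulate until a GC happens to run.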



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
