[ https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16545926#comment-16545926 ]
ASF GitHub Bot commented on PHOENIX-2405:
-----------------------------------------
Github user solzy commented on the issue:
https://github.com/apache/phoenix/pull/308
*@JamesRTaylor* Thanks for your clarification.
----------------------------------------
Yun Zhang
Best regards!
2018-07-17 6:18 GMT+08:00 James Taylor <[email protected]>:
> *@JamesRTaylor* commented on this pull request.
> ------------------------------
>
> In phoenix-core/src/main/java/org/apache/phoenix/iterate/ClientHashAggregatingResultIterator.java
> <https://github.com/apache/phoenix/pull/308#discussion_r202842452>:
>
> > +    protected ImmutableBytesWritable getGroupingKey(Tuple tuple, ImmutableBytesWritable ptr) throws SQLException {
> > +        try {
> > +            ImmutableBytesWritable key = TupleUtil.getConcatenatedValue(tuple, groupByExpressions);
> > +            ptr.set(key.get(), key.getOffset(), key.getLength());
> > +            return ptr;
> > +        } catch (IOException e) {
> > +            throw new SQLException(e);
> > +        }
> > +    }
> > +
> > +    // Copied from ClientGroupedAggregatingResultIterator
> > +    protected Tuple wrapKeyValueAsResult(KeyValue keyValue) {
> > +        return new MultiKeyValueTuple(Collections.<Cell> singletonList(keyValue));
> > +    }
> > +
> > +    private void populateHash() throws SQLException {
>
> @geraldss <https://github.com/geraldss> - memory management is tracked by
> the GlobalMemoryManager. Operations that potentially use memory allocate
> (and eventually free) a set of MemoryChunk instances. You can see an
> example of this in GroupedAggregateRegionObserver (the runtime code for
> aggregation). If the memory in use goes over a threshold
> (phoenix.query.maxGlobalMemoryPercentage and
> phoenix.query.maxTenantMemoryPercentage, the percentage of the Java heap
> that all queries combined are allowed to use), the query will fail. This
> mechanism is mostly used on the server side, as we don't use much memory
> on the client side (we're mostly doing merge joins). One example where we
> do use it on the client side is our broadcast join implementation (see
> HashCacheClient), which tracks the memory held by hash join caches.
>
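To make the MemoryChunk lifecycle concrete, here is a minimal sketch of how an operator might track its allocations against the GlobalMemoryManager. The class and method names follow the org.apache.phoenix.memory package as of the 4.x line, but treat the exact signatures (the constructor in particular) as assumptions to verify against your Phoenix version:

{code}
import org.apache.phoenix.memory.GlobalMemoryManager;
import org.apache.phoenix.memory.MemoryManager;
import org.apache.phoenix.memory.MemoryManager.MemoryChunk;

public class TrackedAllocationSketch {
    public static void main(String[] args) {
        // Cap tracked memory at 15% of the heap, mirroring what
        // phoenix.query.maxGlobalMemoryPercentage controls.
        long maxBytes = (long) (Runtime.getRuntime().maxMemory() * 0.15);
        MemoryManager mm = new GlobalMemoryManager(maxBytes);

        // An operator allocates a chunk up front, grows it as its state
        // grows, and frees it when it is done.
        MemoryChunk chunk = mm.allocate(1024L * 1024);
        try {
            // Growing past the global cap raises InsufficientMemoryException,
            // which is what ultimately fails the query.
            chunk.resize(4L * 1024 * 1024);
        } finally {
            chunk.close(); // return the bytes to the global pool
        }
    }
}
{code}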
> Classes you may want to look at (or perhaps you already have?):
> OrderedResultIterator and MappedByteBufferSortedQueue. Above a certain
> configurable threshold (phoenix.query.spoolThresholdBytes, which defaults
> to 20MB), we output results into memory-mapped files. Have you tried
> decreasing that threshold?
>
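As a concrete illustration of turning the threshold down, a client could pass it as a connection property (a sketch using the standard Phoenix JDBC URL; the 1MB value is just for illustration):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SpoolThresholdSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Spool to disk after 1MB instead of the 20MB default
        props.setProperty("phoenix.query.spoolThresholdBytes",
                Long.toString(1024L * 1024));
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181", props)) {
            // run the ORDER BY query here
        }
    }
}
{code}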
> A couple of JIRAs you may want to take a look at: PHOENIX-2405 (unclear
> if this is still an issue) and PHOENIX-3289. Are you running into issues
> with too many memory-mapped files?
>
>
> Improve performance and stability of server side sort for ORDER BY
> ------------------------------------------------------------------
>
> Key: PHOENIX-2405
> URL: https://issues.apache.org/jira/browse/PHOENIX-2405
> Project: Phoenix
> Issue Type: Bug
> Reporter: James Taylor
> Assignee: Haoran Zhang
> Priority: Major
> Labels: gsoc2016
>
> We currently use memory mapped files to buffer data as it's being sorted in
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory-mapped files are not cleaned up very
> well in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search
> around stack overflow for suggestions on what to do when your app (in this
> case Phoenix) encounters this issue when using mapped buffers, the answers
> tend toward manually cleaning up the mapped buffers or explicitly triggering
> a full GC. See
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
> for example. There are apparently long standing JVM/JRE problems with
> reclamation of mapped buffers. I think we may want to explore in Phoenix a
> different way to achieve what the current code is doing.
> {quote}
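The manual cleanup the quote alludes to is the well-known unmap workaround. A minimal sketch for a JDK 8 era runtime; it relies on internal sun.* APIs that later JDKs restrict, so it is shown only to illustrate the idea:

{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class UnmapSketch {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/spool.bin", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 20 << 20);
            // ... fill and drain the buffer ...

            // Release the mapping now instead of waiting for the buffer to
            // be garbage collected (JDK 8 internal API, not portable).
            ((sun.nio.ch.DirectBuffer) buf).cleaner().clean();
        }
    }
}
{code}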
> Instead of using memory mapped files, we could use heap memory, or perhaps
> there are other mechanisms too.
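For contrast, a heap-backed buffer avoids the address-space problem entirely, at the cost of competing with the rest of the heap; a minimal sketch in plain java.nio, nothing Phoenix-specific:

{code}
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class BufferChoiceSketch {
    public static void main(String[] args) throws Exception {
        // Heap buffer: counted against -Xmx and reclaimed by ordinary GC,
        // so no virtual address space is leaked.
        ByteBuffer heap = ByteBuffer.allocate(20 << 20);

        // Memory-mapped buffer: spills to disk cheaply, but the mapping
        // holds virtual address space until the buffer is collected,
        // which is what produces "OutOfMemoryError: Map failed".
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/sort.bin", "rw");
             FileChannel ch = raf.getChannel()) {
            ch.map(FileChannel.MapMode.READ_WRITE, 0, 20 << 20);
        }
    }
}
{code}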