[ https://issues.apache.org/jira/browse/PHOENIX-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334458#comment-15334458 ]

Lars Hofhansl commented on PHOENIX-3000:
----------------------------------------

I remember there's some memory management code in Phoenix. Can you point me to 
that, [~giacomotaylor]?
I'd like to limit the maximum amount of memory we use here, and rather fail 
the query than impact the region server(s). I remember there was code 
somewhere to configure the amount of heap Phoenix allows itself to use.

As for the attached patch, it addresses two issues:
# if block encoding is used, it guards against holding a reference to the 
backing array of a Cell with a large value when only the key is needed
# without block encoding (but possibly with compression), it avoids holding a 
reference to the entire backing HFileBlock

In both cases it will only make a copy when the key uses <= 10% of the 
backing array.
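
To make the heuristic concrete, here's a minimal sketch of the copy-when-small 
idea (not the attached patch; the helper name and the use of 
ImmutableBytesWritable are mine, only the 10% threshold comes from the 
description above):

{code:java}
import java.util.Arrays;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

public final class SmallKeyCopy {

    // Threshold from the description above: only copy when the referenced
    // slice is at most 10% of its backing array.
    private static final double COPY_THRESHOLD = 0.10;

    private SmallKeyCopy() {
    }

    /**
     * If the slice referenced by ptr is small relative to its backing array
     * (e.g. a short key inside a large HFile block, or next to a Cell with a
     * large value), repoint ptr at a fresh copy of just those bytes so the
     * caller no longer retains a reference to the whole backing array.
     */
    public static void copyIfSmall(ImmutableBytesWritable ptr) {
        byte[] backing = ptr.get();
        int offset = ptr.getOffset();
        int length = ptr.getLength();
        // Already an exact fit: nothing extra would be retained.
        if (offset == 0 && length == backing.length) {
            return;
        }
        // Small slice of a large array: copy, so the backing block can be
        // garbage collected once HBase is done with it.
        if (length <= backing.length * COPY_THRESHOLD) {
            ptr.set(Arrays.copyOfRange(backing, offset, offset + length));
        }
        // Otherwise keep the original reference; copying would not save much.
    }
}
{code}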


> Reduce memory consumption during DISTINCT aggregation
> -----------------------------------------------------
>
>                 Key: PHOENIX-3000
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3000
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>         Attachments: 3000.txt
>
>
> In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on to the ptr 
> handed to us from HBase.
> Note that this pointer points into an HFile Block, and hence we hold onto the 
> entire block for the duration of the aggregation.
> If the column has high cardinality we might end up holding the entire table 
> in memory in the extreme case.



