[ https://issues.apache.org/jira/browse/ACCUMULO-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755053#comment-13755053 ]
Mike Drob commented on ACCUMULO-1534:
-------------------------------------
bq. Another option is to modify the hadoop io.file.buffer.size to something smaller than the current value.

Correction: the property to set might actually be {{tfile.fs.input.buffer.size}} in the Hadoop conf.
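
In case it helps anyone testing this, below is a minimal sketch (mine, not from the original discussion) of how that property could be lowered programmatically on a Hadoop {{Configuration}}. The class name and the 32 KB value are placeholders, and whether this knob actually shrinks the buffers the pooled decompressors hold is exactly the open question above.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class TFileBufferSizeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical: lower the TFile input buffer size before opening readers.
    // 32 KB is only an illustrative value, not a recommendation.
    conf.setInt("tfile.fs.input.buffer.size", 32 * 1024);
    System.out.println(conf.get("tfile.fs.input.buffer.size"));
  }
}
{code}

The same setting could of course go in core-site.xml instead, so every reader built from that configuration picks it up.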
> Tablet Server using large number of decompressors during a scan
> ---------------------------------------------------------------
>
> Key: ACCUMULO-1534
> URL: https://issues.apache.org/jira/browse/ACCUMULO-1534
> Project: Accumulo
> Issue Type: Bug
> Affects Versions: 1.4.3
> Reporter: Mike Drob
> Fix For: 1.5.1, 1.6.0
>
>
> I believe this issue is similar to ACCUMULO-665. We've run into a situation
> where a complex iterator tree creates a large number of decompressors from
> the underlying CodecPool for serving scans. Each decompressor holds on to a
> large buffer and the total volume ends up killing the tserver.
> We have verified that turning off compression makes this problem go away.
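
Not part of the report above, but for anyone who wants to reproduce the workaround: compression can be disabled per table through the {{table.file.compress.type}} property. The sketch below uses the 1.4 Java client API, with the instance name, zookeeper quorum, credentials, and table name as placeholders.

{code:java}
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;

public class DisableTableCompression {
  public static void main(String[] args) throws Exception {
    // Placeholder instance name, zookeeper quorum, credentials, and table.
    ZooKeeperInstance instance = new ZooKeeperInstance("myInstance", "zk1:2181");
    Connector connector = instance.getConnector("root", "secret".getBytes());

    // New RFiles written for this table will be uncompressed, so scans stop
    // checking decompressors out of the Hadoop CodecPool for them.
    connector.tableOperations().setProperty(
        "mytable", "table.file.compress.type", "none");
  }
}
{code}

The shell equivalent is {{config -t mytable -s table.file.compress.type=none}}; files already on disk remain compressed until a compaction rewrites them.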