[ https://issues.apache.org/jira/browse/CASSANDRA-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999459#comment-14999459 ]

Jeff Jirsa commented on CASSANDRA-10680:
----------------------------------------

Initial tests suggest that this change (commit 61d2630e9950e9abc0d8da3939b280ff44b5ddc0)
does indeed fix the issue.


> Deal with small compression chunk size better during streaming plan setup
> -------------------------------------------------------------------------
>
>                 Key: CASSANDRA-10680
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10680
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jeff Jirsa
>            Assignee: Yuki Morishita
>             Fix For: 2.1.x
>
>
> For clusters using a small compression chunk size and terabytes of data, the
> streaming plan calculations will instantiate hundreds of millions of
> CompressionMetadata$Chunk objects, creating unreasonable heap pressure.
> Rather than instantiating all of them at once, streaming should instantiate
> only as many as are needed for a single file per table at a time.
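The lazy-instantiation idea described above can be sketched in plain Java. This is not Cassandra's actual implementation; apart from the class name it mirrors (CompressionMetadata$Chunk), every name here is hypothetical. The point is the contrast: rather than materializing every chunk descriptor for all files up front, hand the consumer an iterator that creates one chunk object at a time.

```java
import java.util.Iterator;
import java.util.function.LongFunction;

// Hypothetical stand-in for CompressionMetadata$Chunk: an offset/length pair.
final class Chunk {
    final long offset;
    final int length;
    Chunk(long offset, int length) { this.offset = offset; this.length = length; }
}

// Lazy view over a file's chunks: only one Chunk object is live at a time,
// instead of hundreds of millions held simultaneously on the heap.
final class LazyChunks implements Iterable<Chunk> {
    private final long chunkCount;
    private final LongFunction<Chunk> loader; // reads one chunk's metadata on demand

    LazyChunks(long chunkCount, LongFunction<Chunk> loader) {
        this.chunkCount = chunkCount;
        this.loader = loader;
    }

    @Override
    public Iterator<Chunk> iterator() {
        return new Iterator<Chunk>() {
            private long next = 0;
            @Override public boolean hasNext() { return next < chunkCount; }
            @Override public Chunk next() { return loader.apply(next++); }
        };
    }
}

public class LazyChunkDemo {
    public static void main(String[] args) {
        // With a 4 KiB chunk size, 1 TiB of data implies ~268 million chunks;
        // a lazy iterator avoids allocating them all at once.
        LazyChunks chunks = new LazyChunks(5, i -> new Chunk(i * 4096L, 4096));
        long total = 0;
        for (Chunk c : chunks) total += c.length;
        System.out.println(total); // 5 chunks of 4096 bytes each
    }
}
```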



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
