[ https://issues.apache.org/jira/browse/CASSANDRA-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13140207#comment-13140207 ]

Mck SembWever commented on CASSANDRA-3427:
------------------------------------------

This is unfortunately a showstopper for our Hadoop jobs querying our production 
cluster.

With 1.0.1, is there any workaround for this issue?
Is it correct that these "compressed block offsets" total
  (<sstable-size> / <chunk_length>) * 8 bytes
per sstable?

If so, would switching to a higher chunk_length be an intermediate 
workaround?
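
To put rough numbers on that formula (illustrative figures only, assuming 
the default 64 KB chunk_length; nothing measured on our cluster):

  100 GB sstable / 64 KB chunk   = ~1.6 million chunks
  1.6M chunks * 8 bytes          = ~12.5 MB of offsets for that sstable

By that arithmetic, quadrupling chunk_length to 256 KB would cut the 
offsets to roughly 3 MB, traded against decompressing larger chunks on 
every read.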
                
> CompressionMetadata is not shared across threads, we create a new one for 
> each read
> -----------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-3427
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3427
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.1
>            Reporter: Sylvain Lebresne
>            Assignee: Sylvain Lebresne
>             Fix For: 1.0.2
>
>
> The CompressionMetadata holds the compressed block offsets in memory. Without 
> being absolutely huge, this is still of non-negligible size as soon as you 
> have a bit of data in the DB. Reallocating it for each read is a very bad 
> idea.
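
The fix presumably boils down to parsing the offsets once per sstable and 
sharing that single instance across reader threads. A minimal sketch of that 
pattern in Java (CompressionMetadataCache and the load() factory are 
illustrative names, not the actual 1.0.2 patch):

    // Sketch only: keep one CompressionMetadata per sstable so the chunk
    // offsets are parsed once and shared by every reader thread, instead of
    // being reallocated on each read. CompressionMetadataCache and load()
    // are illustrative names; CompressionMetadata is Cassandra's class.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class CompressionMetadataCache
    {
        private final ConcurrentMap<String, CompressionMetadata> cache =
            new ConcurrentHashMap<String, CompressionMetadata>();

        public CompressionMetadata get(String dataFilePath)
        {
            CompressionMetadata metadata = cache.get(dataFilePath);
            if (metadata != null)
                return metadata;

            // Losers of this benign race just drop their copy and use the
            // instance the winner published.
            CompressionMetadata loaded = load(dataFilePath);
            CompressionMetadata raced = cache.putIfAbsent(dataFilePath, loaded);
            return raced == null ? loaded : raced;
        }

        // Evict when the sstable goes away, e.g. after compaction.
        public void remove(String dataFilePath)
        {
            cache.remove(dataFilePath);
        }

        private CompressionMetadata load(String dataFilePath)
        {
            // Stand-in for however the offsets index is actually read from
            // the compression info component on disk.
            return CompressionMetadata.create(dataFilePath);
        }
    }

The putIfAbsent race means two threads may occasionally parse the same file, 
which is harmless; the point is that in steady state there is exactly one 
offsets array per sstable rather than one per read.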
