[ https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14289213#comment-14289213 ]

Branimir Lambov commented on CASSANDRA-6809:
--------------------------------------------

Thank you, I did not realise you were interested only in parallelism between 
segments. Of course, what you suggest is the right solution if we are limited 
to that; I approached the problem assuming we need shorter sections of the 
same segment to progress in parallel. I can see that your approach should work 
well enough with large sync periods, including the 10s default.

I am happy to continue with either approach, or to drop multithreaded 
compression altogether. I am now going back to addressing the individual 
issues Ariel raised.


> Compressed Commit Log
> ---------------------
>
>                 Key: CASSANDRA-6809
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benedict
>            Assignee: Branimir Lambov
>            Priority: Minor
>              Labels: performance
>             Fix For: 3.0
>
>         Attachments: ComitLogStress.java, logtest.txt
>
>
> It seems an unnecessary oversight that we don't compress the commit log. 
> Doing so should improve throughput, but some care will need to be taken to 
> ensure we use as much of a segment as possible. I propose decoupling the 
> writing of the records from the segments. Basically write into a (queue of) 
> DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
> MB written to the CL (where X is ordinarily CLS size), and then pack as many 
> of the compressed chunks into a CLS as possible.
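
The proposal above can be sketched roughly as follows. This is only an illustration of the decoupling idea, assuming hypothetical names throughout (`CompressedSegmentSketch`, `compressChunk`, `packIntoSegment` are not Cassandra classes), with DEFLATE standing in for whatever compressor is ultimately chosen:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.zip.Deflater;

// Hypothetical sketch: records land in direct buffers, and the sync thread
// compresses ~64K chunks and packs as many compressed chunks as fit into one
// commit log segment. All names are illustrative, not Cassandra's actual code.
public class CompressedSegmentSketch {
    static final int CHUNK_SIZE = 64 * 1024;

    // Compress a single chunk with DEFLATE at the fastest level.
    static byte[] compressChunk(ByteBuffer chunk) {
        byte[] input = new byte[chunk.remaining()];
        chunk.get(input);
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!deflater.finished())
            out.write(buf, 0, deflater.deflate(buf));
        deflater.end();
        return out.toByteArray();
    }

    // Pack compressed chunks into a segment until the next one no longer
    // fits; leftover chunks stay queued for the next segment.
    static int packIntoSegment(Queue<byte[]> compressed, ByteBuffer segment) {
        int packed = 0;
        while (!compressed.isEmpty()
               && compressed.peek().length + Integer.BYTES <= segment.remaining()) {
            byte[] chunk = compressed.poll();
            segment.putInt(chunk.length); // length prefix for later decompression
            segment.put(chunk);
            packed++;
        }
        return packed;
    }

    public static void main(String[] args) {
        // Simulate mutation bytes written to the CL (compressible data).
        ByteBuffer raw = ByteBuffer.allocateDirect(4 * CHUNK_SIZE);
        while (raw.hasRemaining())
            raw.put((byte) 'a');
        raw.flip();

        // Sync thread: slice into ~64K chunks and compress each one.
        Queue<byte[]> compressed = new ArrayDeque<>();
        while (raw.hasRemaining()) {
            ByteBuffer chunk = raw.duplicate();
            chunk.limit(Math.min(raw.position() + CHUNK_SIZE, raw.limit()));
            chunk.position(raw.position());
            raw.position(chunk.limit());
            compressed.add(compressChunk(chunk));
        }

        // Pack into one segment; compression leaves most of it free.
        ByteBuffer segment = ByteBuffer.allocate(4 * CHUNK_SIZE);
        int packed = packIntoSegment(compressed, segment);
        System.out.println("chunks packed: " + packed
                           + ", segment bytes used: " + segment.position());
    }
}
```

The length prefix per chunk is one possible framing so the reader side can locate chunk boundaries; a real implementation would also need a checksum and recovery-safe markers, which are omitted here.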



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)