[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-10-27 Thread Adam Holmberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holmberg updated CASSANDRA-10520:
--
Labels: messaging-service-bump-required  (was: client-impacting 
messaging-service-bump-required)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.0
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.
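
To make the proposal above concrete, the following is a minimal, illustrative sketch of the chunk-level decision it describes. It is not the committed Cassandra code: the class, the {{Compressor}} stand-in interface and the {{MAX_COMPRESSED_RATIO}} constant are hypothetical names used only to show the idea. The write path keeps the raw chunk whenever compression returns a buffer larger than 90% of the input; the read path recognizes such a chunk purely by its stored size and copies it through without decompressing.

{code:java}
import java.nio.ByteBuffer;

// Illustrative sketch only -- not Cassandra's CompressedSequentialWriter /
// CompressedRandomAccessReader. All names here are hypothetical.
public final class ChunkCodecSketch
{
    // Store a chunk uncompressed when compression would save less than ~10%.
    static final double MAX_COMPRESSED_RATIO = 0.9;

    // Stand-in for a block compressor; only what the sketch needs.
    interface Compressor
    {
        ByteBuffer compress(ByteBuffer input);
        void uncompress(ByteBuffer input, ByteBuffer output);
    }

    // Write path: fall back to the raw chunk when the compressed form is
    // larger than 90% of the input, so writing never gets slower.
    static ByteBuffer encodeChunk(ByteBuffer chunk, Compressor compressor)
    {
        ByteBuffer compressed = compressor.compress(chunk.duplicate());
        if (compressed.remaining() > chunk.remaining() * MAX_COMPRESSED_RATIO)
            return chunk;   // stored as-is; never exceeds the configured chunk length
        return compressed;
    }

    // Read path: a stored chunk above the threshold must be uncompressed, so its
    // bytes go straight into the target buffer, skipping the decompression step.
    static void decodeChunk(ByteBuffer stored, ByteBuffer target, int chunkLength, Compressor compressor)
    {
        if (stored.remaining() > chunkLength * MAX_COMPRESSED_RATIO)
            target.put(stored);
        else
            compressor.uncompress(stored, target);
    }
}
{code}

Because a chunk stored this way is never larger than the configured chunk length, a 64k buffer for on-disk chunk data always suffices, and a single threshold constant kept in the compression metadata is enough to tell the two chunk forms apart on read.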



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-09-27 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10520:

Labels: client-impacting messaging-service-bump-required  (was: 
messaging-service-bump-required)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: client-impacting, messaging-service-bump-required
> Fix For: 4.0
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-09-27 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10520:

Fix Version/s: (was: 4.x)
   4.0

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.0
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-04-02 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Status: Ready to Commit  (was: Patch Available)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-04-02 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-02-24 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Status: Patch Available  (was: Reopened)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-02-21 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-02-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10520:
-
Status: Ready to Commit  (was: Patch Available)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-01-18 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Attachment: ReadWriteTestCompression.java

Microbenchmark

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: ReadWriteTestCompression.java
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2017-01-18 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Status: Patch Available  (was: Open)

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2016-07-14 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10520:
-
Status: Open  (was: Patch Available)

Just set the status to "open" as we're still a while away from 4.0. Feel free to
set it back to Patch Available as soon as 4.0 is in sight.

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2016-01-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10520:
--
Fix Version/s: (was: 3.0.x)
   4.x

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 4.x
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2015-12-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10520:
-
Labels: messaging-service-bump-required  (was: )

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 3.0.x
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2015-12-02 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10520:

Component/s: Local Write-Read Paths

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
> Fix For: 3.0.x
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2015-11-30 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10520:

Reviewer: Robert Stupp

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
> Fix For: 3.0.x
>
>
> Compressing incompressible data, as done, for instance, when writing SSTables
> during stress tests, results in chunks larger than 64k, which are a problem
> for the buffer pooling mechanisms employed by the
> {{CompressedRandomAccessReader}}. This results in non-negligible performance
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it
> does not provide benefits, I think we should allow compressed files to store
> uncompressed chunks as an alternative to compressed data. Such a chunk could be
> written whenever compression returns a buffer larger than, for example, 90% of
> the input, and would not result in additional write delays. On reads it
> could be recognized by its size (using a single global threshold constant in the
> compression metadata), and the data could be transferred directly into the
> decompressed buffer, skipping the decompression step and ensuring that a 64k
> buffer for compressed data always suffices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)