rustyrazorblade commented on code in PR #3606:
URL: https://github.com/apache/cassandra/pull/3606#discussion_r1812781989
##########
src/java/org/apache/cassandra/io/util/CompressedChunkReader.java:
##########
@@ -117,8 +141,23 @@ public void readChunk(long position, ByteBuffer uncompressed)
     {
         ByteBuffer compressed = bufferHolder.getBuffer(length);
-        if (channel.read(compressed, chunk.offset) != length)
-            throw new CorruptBlockException(channel.filePath(), chunk);
+        if (readAheadBuffer != null && readAheadBuffer.hasBuffer())
+        {
+            int copied = 0;
+            while (copied < length) {
Review Comment:
-1 to making this a table-level option. There's no benefit, and it adds unnecessary complexity.
I think the problem here stems from allowing the user to specify a small buffer size. It doesn't make sense to use a buffer smaller than 256KB. That isn't just because of the way EBS works (as mentioned in the ticket) but because of the way *every* disk works.
The buffer should be larger than the chunk size but also >= 256KB; allowing anything smaller is an unnecessary configuration option that introduces edge cases and sub-optimal performance.
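For illustration, a minimal sketch of the sizing rule I have in mind (the class and method names here are hypothetical, not part of this patch):

```java
// Illustrative sketch only: MIN_READ_AHEAD_SIZE and effectiveBufferSize are
// hypothetical names, not this patch's API.
public final class ReadAheadSizing
{
    private static final int MIN_READ_AHEAD_SIZE = 256 * 1024; // 256KB floor

    // Returns a buffer size that is at least 256KB and at least one chunk,
    // rounded up to a whole number of chunks so reads stay chunk-aligned.
    public static int effectiveBufferSize(int configured, int chunkSize)
    {
        int floor = Math.max(MIN_READ_AHEAD_SIZE, chunkSize);
        int size = Math.max(configured, floor);
        int chunks = (size + chunkSize - 1) / chunkSize; // ceiling division
        return chunks * chunkSize;
    }
}
```

With a floor like that enforced in one place, there is nothing left for users to tune per table.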