jrwest commented on code in PR #3606:
URL: https://github.com/apache/cassandra/pull/3606#discussion_r1811682134
##########
src/java/org/apache/cassandra/io/util/CompressedChunkReader.java:
##########
@@ -117,8 +141,23 @@ public void readChunk(long position, ByteBuffer uncompressed)
 {
     ByteBuffer compressed = bufferHolder.getBuffer(length);
-    if (channel.read(compressed, chunk.offset) != length)
-        throw new CorruptBlockException(channel.filePath(), chunk);
+    if (readAheadBuffer != null && readAheadBuffer.hasBuffer())
+    {
+        int copied = 0;
+        while (copied < length) {
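The hunk above is truncated at the start of the copy loop. As a minimal, self-contained sketch of the pattern those `+` lines appear to begin — serve the chunk from a read-ahead window and refill it from the channel on a miss — consider the following. Every name here (ReadAheadBufferSketch, window, read) is a hypothetical stand-in for illustration, not the PR's actual ReadAheadBuffer API:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    // Hypothetical sketch only; class and method names are stand-ins, not the PR's API.
    final class ReadAheadBufferSketch
    {
        private final FileChannel channel;
        private final ByteBuffer window;  // the read-ahead buffer, e.g. 256 KiB
        private long windowStart = -1;    // file offset of window[0]; -1 means empty

        ReadAheadBufferSketch(FileChannel channel, int size)
        {
            this.channel = channel;
            this.window = ByteBuffer.allocateDirect(size);
        }

        // Copies up to dest.remaining() bytes starting at file offset `position`
        // into dest, refilling the window from the channel on a miss. Returns
        // the number of bytes copied, or <= 0 on EOF.
        int read(ByteBuffer dest, long position) throws IOException
        {
            if (windowStart < 0 || position < windowStart || position >= windowStart + window.limit())
            {
                window.clear();
                int n = channel.read(window, position);
                if (n <= 0)
                    return n;
                window.flip();
                windowStart = position;
            }
            // Duplicate so the shared window's own cursor is left untouched.
            ByteBuffer slice = window.duplicate();
            slice.position((int) (position - windowStart));
            int toCopy = Math.min(slice.remaining(), dest.remaining());
            slice.limit(slice.position() + toCopy);
            dest.put(slice);
            return toCopy;
        }
    }

A caller using the diff's variable names would then loop until the chunk is fully copied, treating a short read as a corrupt block (mirroring the `-` lines above):

    int copied = 0;
    while (copied < length)
    {
        int n = readAheadBuffer.read(compressed, chunk.offset + copied);
        if (n <= 0)
            throw new CorruptBlockException(channel.filePath(), chunk);
        copied += n;
    }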
Review Comment:
That brings up an interesting point in terms of validation: the read-ahead buffer size is a cassandra.yaml setting, but chunk size is table-level. Some thoughts/questions:
* Do we make the buffer size a table-level setting with a default in the YAML? I don't love it as a table-level parameter because it's driven more by the disks than by the data shape.
* If we leave it as is, what do we do when someone creates or alters a table such that chunk size > buffer size? This should be rare, since the buffer size should be 64k or greater, but we still have to handle it. One option I see is to log a warning and fall back to the plain chunk-read path (rough sketch below).
Thoughts @maedhroz @dcapwell @rustyrazorblade?
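To make the second bullet concrete, here is a minimal sketch of the warn-and-fall-back option. The accessor names and byte-denominated parameters are hypothetical; the real plumbing through the table params and Config would look different, and the fallback itself is just the existing direct channel.read() path from the diff above:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical guard for the chunk-size > buffer-size case; names are illustrative.
    final class ReadAheadGuardSketch
    {
        private static final Logger logger = LoggerFactory.getLogger(ReadAheadGuardSketch.class);

        // Returns whether read ahead can be used for this table's sstables.
        // Rather than rejecting the CREATE/ALTER, warn and fall back to plain
        // chunk reads when the chunk no longer fits in the read-ahead buffer.
        static boolean readAheadUsable(String table, int chunkLengthBytes, int readAheadBufferBytes)
        {
            if (chunkLengthBytes > readAheadBufferBytes)
            {
                logger.warn("Table {}: compression chunk length ({} bytes) exceeds the " +
                            "cassandra.yaml read-ahead buffer size ({} bytes); " +
                            "read ahead will be skipped for this table's sstables",
                            table, chunkLengthBytes, readAheadBufferBytes);
                return false;
            }
            return true;
        }
    }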
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]