rustyrazorblade commented on code in PR #3606:
URL: https://github.com/apache/cassandra/pull/3606#discussion_r1850646455


##########
src/java/org/apache/cassandra/io/util/CompressedChunkReader.java:
##########
@@ -117,8 +141,23 @@ public void readChunk(long position, ByteBuffer uncompressed)
                 {
                     ByteBuffer compressed = bufferHolder.getBuffer(length);
 
-                    if (channel.read(compressed, chunk.offset) != length)
-                        throw new CorruptBlockException(channel.filePath(), chunk);
+                    if (readAheadBuffer != null && readAheadBuffer.hasBuffer())
+                    {
+                        int copied = 0;
+                        while (copied < length) {

Review Comment:
   Hmm... hard for me to say.  I was previously under the impression that there wasn't a good reason to use a larger size for LZ4 because it had some upper limit on what it would actually compress, but the docs gave me the impression that it can handle fairly large buffers, so maybe there's a good reason to use one.  If you've got some fairly large partitions and you're only slicing them, there could be a benefit.  I lean towards no warning, but if you feel strongly about it I'm not opposed.  I don't know what action a user would take if they have a valid use case for a large buffer, and there shouldn't be any real perf loss from missing the internal read-ahead code path if the reads are large enough.
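   To make the quoted hunk concrete, here is a rough, hypothetical sketch of the copy loop the diff introduces: when a read-ahead buffer already holds pre-fetched bytes, the reader copies from it into the compressed-chunk buffer instead of issuing a fresh `channel.read`.  The class and method names below are illustrative, not the actual PR code, and the refill step is elided.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the read-ahead copy pattern under discussion.
// In the real patch the source is CompressedChunkReader's read-ahead
// buffer; here it is simulated with a plain ByteBuffer.
public class ReadAheadCopySketch
{
    // Copy up to `length` bytes from the read-ahead buffer into `compressed`.
    // Returns how many bytes were actually copied.
    static int copyFromReadAhead(ByteBuffer readAhead, ByteBuffer compressed, int length)
    {
        int copied = 0;
        while (copied < length)
        {
            // Copy min(bytes available in read-ahead, bytes still needed).
            int toCopy = Math.min(readAhead.remaining(), length - copied);
            if (toCopy == 0)
                break; // the real code would refill the read-ahead buffer from the channel here

            // Bounded view so put() cannot overrun the requested length.
            ByteBuffer slice = readAhead.duplicate();
            slice.limit(slice.position() + toCopy);
            compressed.put(slice);

            readAhead.position(readAhead.position() + toCopy);
            copied += toCopy;
        }
        return copied;
    }

    public static void main(String[] args)
    {
        // 16 pre-fetched bytes in the read-ahead buffer, 8 requested.
        ByteBuffer readAhead = ByteBuffer.allocate(16);
        for (int i = 0; i < 16; i++)
            readAhead.put((byte) i);
        readAhead.flip();

        ByteBuffer compressed = ByteBuffer.allocate(8);
        int copied = copyFromReadAhead(readAhead, compressed, 8);
        System.out.println(copied); // prints 8
    }
}
```

   The loop shape matters for the review point above: if a single read is larger than the read-ahead buffer, the copy degenerates into plain channel reads, which is why the reviewer expects no real perf loss for large reads.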



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: pr-unsubscr...@cassandra.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: pr-h...@cassandra.apache.org