viirya commented on a change in pull request #2201:
URL: https://github.com/apache/hadoop/pull/2201#discussion_r485253169
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##########
@@ -276,13 +258,27 @@ public void end() {
// do nothing
}
- private native static void initIDs();
+ private int decompressBytesDirect() throws IOException {
+ if (compressedDirectBufLen == 0) {
+ return 0;
+ } else {
+ // Set the position and limit of `compressedDirectBuf` for reading
+ compressedDirectBuf.position(0).limit(compressedDirectBufLen);
+ // There is compressed input, decompress it now.
+ int size = Snappy.uncompressedLength((ByteBuffer) compressedDirectBuf);
+ if (size > uncompressedDirectBuf.capacity()) {
Review comment:
Should we check against `uncompressedDirectBuf.remaining()` instead of
`capacity()`?
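A minimal sketch of the distinction, with illustrative buffer sizes (not taken from the PR):

```java
import java.nio.ByteBuffer;

public class RemainingVsCapacity {
    public static void main(String[] args) {
        // Allocate a 64-byte buffer and simulate a partially consumed window.
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.position(16).limit(48);

        // capacity() is fixed at allocation time; remaining() = limit - position.
        System.out.println(buf.capacity());  // 64
        System.out.println(buf.remaining()); // 32

        // A size check against capacity() can pass even though the usable
        // window (remaining()) is too small to hold the output.
        int size = 40; // hypothetical uncompressed length
        System.out.println(size > buf.capacity());  // false
        System.out.println(size > buf.remaining()); // true
    }
}
```

So a `size > capacity()` check only rejects output that could never fit in the buffer at all, while `size > remaining()` also catches the case where the buffer's current position/limit leave too little room.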
##########
File path:
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##########
@@ -276,13 +258,27 @@ public void end() {
// do nothing
}
- private native static void initIDs();
+ private int decompressBytesDirect() throws IOException {
+ if (compressedDirectBufLen == 0) {
+ return 0;
+ } else {
+ // Set the position and limit of `compressedDirectBuf` for reading
+ compressedDirectBuf.position(0).limit(compressedDirectBufLen);
Review comment:
I'm not sure we need to set position and limit here. If
`compressedDirectBuf` has already had its position and limit set before
`decompressBytesDirect` is called, won't resetting them here cause us to
read the wrong data from this buffer?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]