[ https://issues.apache.org/jira/browse/HDFS-14099?focusedWorklogId=701498&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-701498 ]
ASF GitHub Bot logged work on HDFS-14099:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 28/Dec/21 12:04
Start Date: 28/Dec/21 12:04
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3836:
URL: https://github.com/apache/hadoop/pull/3836#issuecomment-1002060153
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 40m 32s | | trunk passed |
| +1 :green_heart: | compile | 30m 31s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 25m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 15s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 56s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 58s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 5s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 22s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 58s | | the patch passed |
| +1 :green_heart: | compile | 21m 34s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 21m 34s | | the patch passed |
| +1 :green_heart: | compile | 19m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 19m 32s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 3s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 37s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 10s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 38s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 19s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 17m 25s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3836/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. |
| | | 222m 41s | | |
| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.ipc.TestIPC |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3836/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3836 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux e6e4b8ba3ab9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 7b9875de85f46c179b08dc4384c38ca872117681 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3836/1/testReport/ |
| Max. process+thread count | 1767 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3836/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
Issue Time Tracking
-------------------
Worklog Id: (was: 701498)
Time Spent: 20m (was: 10m)
> Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor
> -------------------------------------------------------------------------------------
>
> Key: HDFS-14099
> URL: https://issues.apache.org/jira/browse/HDFS-14099
> Project: Hadoop HDFS
> Issue Type: Bug
> Environment: Hadoop Version: hadoop-3.0.3
> Java Version: 1.8.0_144
> Reporter: xuzq
> Assignee: xuzq
> Priority: Major
> Labels: pull-request-available
> Attachments: HDFS-14099-trunk-001.patch, HDFS-14099-trunk-002.patch,
> HDFS-14099-trunk-003.patch
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> We need to use the ZSTD compression algorithm in Hadoop, so I wrote a simple
> demo like this for testing.
> {code:java}
> // code placeholder
> while ((size = fsDataInputStream.read(bufferV2)) > 0) {
>   countSize += size;
>   if (countSize == 65536 * 8) {
>     if (!isFinished) {
>       // finish a frame in zstd
>       cmpOut.finish();
>       isFinished = true;
>     }
>     fsDataOutputStream.flush();
>     fsDataOutputStream.hflush();
>   }
>   if (isFinished) {
>     LOG.info("Will resetState. N=" + n);
>     // reset the stream and write again
>     cmpOut.resetState();
>     isFinished = false;
>   }
>   cmpOut.write(bufferV2, 0, size);
>   bufferV2 = new byte[5 * 1024 * 1024];
>   n++;
> }
> {code}
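>
> For reference, here is a more self-contained sketch of the same write pattern (the class name, file name, and buffer size are made up for illustration, and it assumes the native zstd library is loaded). It produces a single file containing two concatenated zstd frames:
> {code:java}
> // A minimal sketch, not the original demo: write two concatenated zstd frames
> // into one file with the Hadoop codec API (requires the native zstd library).
> import java.io.FileOutputStream;
> import java.io.OutputStream;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.io.compress.CompressionOutputStream;
> import org.apache.hadoop.io.compress.ZStandardCodec;
>
> public class MultiFrameZstdWriter {
>   public static void main(String[] args) throws Exception {
>     ZStandardCodec codec = new ZStandardCodec();
>     codec.setConf(new Configuration());
>
>     byte[] block = new byte[65536];
>     try (OutputStream raw = new FileOutputStream("multi-frame.zst");
>          CompressionOutputStream cmpOut = codec.createOutputStream(raw)) {
>       cmpOut.write(block, 0, block.length);
>       cmpOut.finish();      // ends the first zstd frame
>       cmpOut.resetState();  // starts a second frame in the same file
>       cmpOut.write(block, 0, block.length);
>       cmpOut.finish();      // ends the second frame
>     }
>   }
> }
> {code}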
>
> Then I used "*hadoop fs -text*" to read this file, and it failed. The error is
> shown below.
> {code:java}
> Exception in thread "main" java.lang.InternalError: Unknown frame descriptor
>     at org.apache.hadoop.io.compress.zstd.ZStandardDecompressor.inflateBytesDirect(Native Method)
>     at org.apache.hadoop.io.compress.zstd.ZStandardDecompressor.decompress(ZStandardDecompressor.java:181)
>     at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:111)
>     at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
>     at java.io.InputStream.read(InputStream.java:101)
>     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:98)
>     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
>     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
>     at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>     at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>     at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
>     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
>     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
>     at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>     at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>     at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> {code}
>
> So I looked into the code, including the JNI part, and found this bug.
> The *ZSTD_initDStream(stream)* method may be called twice for the same *Frame*.
> The first call is in *ZStandardDecompressor.c*:
> {code:java}
> if (size == 0) {
>   (*env)->SetBooleanField(env, this, ZStandardDecompressor_finished, JNI_TRUE);
>   size_t result = dlsym_ZSTD_initDStream(stream);
>   if (dlsym_ZSTD_isError(result)) {
>     THROW(env, "java/lang/InternalError", dlsym_ZSTD_getErrorName(result));
>     return (jint) 0;
>   }
> }
> {code}
> This call here is correct, but *finished* is never set back to false, even when
> there is still data (a new frame) in *CompressedBuffer* or *UserBuffer* that
> needs to be decompressed.
> The second call happens through *decompressor.reset()* in
> *org.apache.hadoop.io.compress.DecompressorStream*, because *finished* remains
> true after a *Frame* has been decompressed.
> {code:java}
> if (decompressor.finished()) {
>   // First see if there was any leftover buffered input from previous
>   // stream; if not, attempt to refill buffer. If refill -> EOF, we're
>   // all done; else reset, fix up input buffer, and get ready for next
>   // concatenated substream/"member".
>   int nRemaining = decompressor.getRemaining();
>   if (nRemaining == 0) {
>     int m = getCompressedData();
>     if (m == -1) {
>       // apparently the previous end-of-stream was also end-of-file:
>       // return success, as if we had never called getCompressedData()
>       eof = true;
>       return -1;
>     }
>     decompressor.reset();
>     decompressor.setInput(buffer, 0, m);
>     lastBytesSent = m;
>   } else {
>     // looks like it's a concatenated stream: reset low-level zlib (or
>     // other engine) and buffers, then "resend" remaining input data
>     decompressor.reset();
>     int leftoverOffset = lastBytesSent - nRemaining;
>     assert (leftoverOffset >= 0);
>     // this recopies userBuf -> direct buffer if using native libraries:
>     decompressor.setInput(buffer, leftoverOffset, nRemaining);
>     // NOTE: this is the one place we do NOT want to save the number
>     // of bytes sent (nRemaining here) into lastBytesSent: since we
>     // are resending what we've already sent before, offset is nonzero
>     // in general (only way it could be zero is if it already equals
>     // nRemaining), which would then screw up the offset calculation
>     // _next_ time around. IOW, getRemaining() is in terms of the
>     // original, zero-offset bufferload, so lastBytesSent must be as
>     // well. Cheesy ASCII art:
>     //
>     // <------------ m, lastBytesSent ----------->
>     // +===============================================+
>     // buffer: |1111111111|22222222222222222|333333333333| |
>     // +===============================================+
>     // #1: <-- off -->|<-------- nRemaining --------->
>     // #2: <----------- off ----------->|<-- nRem. -->
>     // #3: (final substream: nRemaining == 0; eof = true)
>     //
>     // If lastBytesSent is anything other than m, as shown, then "off"
>     // will be calculated incorrectly.
>   }
> }
> {code}
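>
> For illustration, here is a minimal read-back sketch (the class and file names are assumed, not part of the original report) that drives this reset path through the zstd codec; on a build without the fix it is expected to fail with the "Unknown frame descriptor" error shown above:
> {code:java}
> // A hedged sketch: read the two-frame file written above back through the codec.
> // Assumes the native zstd library is loaded; names here are hypothetical.
> import java.io.ByteArrayOutputStream;
> import java.io.FileInputStream;
> import java.io.InputStream;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.io.IOUtils;
> import org.apache.hadoop.io.compress.CompressionInputStream;
> import org.apache.hadoop.io.compress.ZStandardCodec;
>
> public class MultiFrameZstdReader {
>   public static void main(String[] args) throws Exception {
>     ZStandardCodec codec = new ZStandardCodec();
>     codec.setConf(new Configuration());
>
>     ByteArrayOutputStream out = new ByteArrayOutputStream();
>     try (InputStream raw = new FileInputStream("multi-frame.zst");
>          CompressionInputStream in = codec.createInputStream(raw)) {
>       // DecompressorStream calls decompressor.reset() at each frame boundary;
>       // without the fix this throws java.lang.InternalError: Unknown frame descriptor.
>       IOUtils.copyBytes(in, out, 4096, false);
>     }
>     System.out.println("decompressed " + out.size() + " bytes");
>   }
> }
> {code}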