[
https://issues.apache.org/jira/browse/HADOOP-16768?focusedWorklogId=633304&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-633304
]
ASF GitHub Bot logged work on HADOOP-16768:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 04/Aug/21 05:35
Start Date: 04/Aug/21 05:35
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3264:
URL: https://github.com/apache/hadoop/pull/3264#issuecomment-892376931
:confetti_ball: **+1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 11m 14s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ branch-3.2 Compile Tests _ |
| +0 :ok: | mvndep | 3m 22s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 36s | | branch-3.2 passed |
| +1 :green_heart: | compile | 15m 35s | | branch-3.2 passed |
| +1 :green_heart: | checkstyle | 2m 40s | | branch-3.2 passed |
| +1 :green_heart: | mvnsite | 1m 41s | | branch-3.2 passed |
| +1 :green_heart: | javadoc | 1m 31s | | branch-3.2 passed |
| +0 :ok: | spotbugs | 0m 27s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 15m 26s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 3s | | the patch passed |
| +1 :green_heart: | compile | 14m 51s | | the patch passed |
| +1 :green_heart: | javac | 14m 51s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 37s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 41s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed |
| +0 :ok: | spotbugs | 0m 24s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 16m 1s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 22s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 16m 1s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | 136m 59s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3264/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3264 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle |
| uname | Linux 1a9fea32a89d 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.2 / 19c0c1f752c19f7713eddd482bc337c6bc107d0d |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3264/1/testReport/ |
| Max. process+thread count | 1273 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3264/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 633304)
Time Spent: 20m (was: 10m)
> SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data
> ---------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
> Issue Type: Bug
> Components: io, test
> Environment: X86/Aarch64
> OS: Ubuntu 18.04, CentOS 8
> Snappy 1.1.7
> Reporter: zhao bo
> Assignee: Akira Ajisaka
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
> * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
> * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
> * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
> * org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR] TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92 Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' but got unexpected exception: java.lang.NullPointerException
> at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
> at com.google.common.base.Joiner.toString(Joiner.java:452)
> at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> at com.google.common.base.Joiner.join(Joiner.java:195)
> at com.google.common.base.Joiner.join(Joiner.java:185)
> at com.google.common.base.Joiner.join(Joiner.java:211)
> at org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
> at org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
> at org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>
>
> * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
> * org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> [ERROR] testSnappyCompressDecompress(org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor) Time elapsed: 0.003 s <<< ERROR!
> java.lang.InternalError: Could not decompress data. Input is invalid.
> at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native Method)
> at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:235)
> at org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress(TestSnappyCompressorDecompressor.java:192)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
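The faulty assumption named in the issue title can be demonstrated outside Hadoop: incompressible input cannot shrink, so a compressor's output (payload plus framing overhead) can exceed the input size, and tests must size output buffers accordingly. A minimal sketch using Python's zlib as a stand-in for Snappy (illustration of the general principle only, not the Hadoop code path):

```python
import os
import zlib

# Random bytes are essentially incompressible: the compressor cannot
# find redundancy to remove, so it can only add header/framing overhead,
# and the "compressed" output ends up larger than the input.
data = os.urandom(64 * 1024)
compressed = zlib.compress(data)

print(f"input: {len(data)} bytes, compressed: {len(compressed)} bytes")
assert len(compressed) > len(data), "incompressible input expanded, as expected"
```

This is why Snappy (like zlib's `compressBound`) exposes a worst-case output size that is strictly larger than the input length; a test that allocates an output buffer equal to the input size can fail on valid data.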
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]