[ https://issues.apache.org/jira/browse/HADOOP-19255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17875448#comment-17875448 ]

ASF GitHub Bot commented on HADOOP-19255:
-----------------------------------------

hadoop-yetus commented on PR #7009:
URL: https://github.com/apache/hadoop/pull/7009#issuecomment-2301798222

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  52m 10s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7009/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  19m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  17m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  shadedclient  | 133m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |  18m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |  17m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  shadedclient  |  45m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 239m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7009/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7009 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 67580bd81d9a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 380d5ce8432769577b78987d3ac726b7dbbd2848 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7009/1/testReport/ |
   | Max. process+thread count | 3137 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7009/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> LZO files cannot be decompressed
> --------------------------------
>
>                 Key: HADOOP-19255
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19255
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.4.0
>            Reporter: Shailesh Gupta
>            Priority: Critical
>              Labels: pull-request-available
>
> The following command fails with the below exception:
> hadoop fs -text [file:///home/hadoop/part-ak.lzo]
> {code:java}
> 2024-08-21 05:05:07,418 INFO lzo.GPLNativeCodeLoader: Loaded native gpl 
> library
> 2024-08-21 05:05:08,706 INFO lzo.LzoCodec: Successfully loaded & initialized 
> native-lzo library [hadoop-lzo rev 049362b7cf53ff5f739d6b1532457f2c6cd495e8]
> 2024-08-21 05:07:01,542 INFO compress.CodecPool: Got brand-new decompressor 
> [.lzo]
> 2024-08-21 05:07:14,558 WARN lzo.LzopInputStream: Incorrect LZO file format: 
> file did not end with four trailing zeroes.
> java.io.IOException: Corrupted uncompressed block
>     at 
> com.hadoop.compression.lzo.LzopInputStream.verifyChecksums(LzopInputStream.java:219)
>     at 
> com.hadoop.compression.lzo.LzopInputStream.close(LzopInputStream.java:342)
>     at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:102)
>     at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:95)
>     at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:383)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:346)
>     at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:319)
>     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:301)
>     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:285)
>     at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
>     at org.apache.hadoop.fs.shell.Command.run(Command.java:192)
>     at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97)
>     at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
> Exception in thread "main" java.lang.InternalError: lzo1x_decompress_safe 
> returned: -5
>     at 
> com.hadoop.compression.lzo.LzoDecompressor.decompressBytesDirect(Native 
> Method)
>     at 
> com.hadoop.compression.lzo.LzoDecompressor.decompress(LzoDecompressor.java:315)
>     at 
> com.hadoop.compression.lzo.LzopDecompressor.decompress(LzopDecompressor.java:122)
>     at 
> com.hadoop.compression.lzo.LzopInputStream.decompress(LzopInputStream.java:252)
>     at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:110)
>     at java.base/java.io.InputStream.read(InputStream.java:218)
>     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:95)
>     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
>     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:132)
>     at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:100)
>     at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:95)
>     at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:383)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:346)
>     at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:319)
>     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:301)
>     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:285)
>     at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
>     at org.apache.hadoop.fs.shell.Command.run(Command.java:192)
>     at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97)
>     at org.apache.hadoop.fs.FsShell.main(FsShell.java:390) {code}
> From my analysis, I pinpointed the 
> [change|https://github.com/apache/hadoop/pull/5912/files#diff-268b9968a4db21ac6eeb7bcaef10e4db744d00ba53989fc7251bb3e8d9eac7dfR904]
>  which changed _io.compression.codec.lzo.buffersize_ from 64KB to 256KB.
> Earlier, the default value was picked up from 
> [here|https://github.com/twitter/hadoop-lzo/blob/master/src/main/java/com/hadoop/compression/lzo/LzoCodec.java#L51].
> Does my analysis look correct? What would be the proper approach to fixing 
> this?
>  
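If the buffer-size default change is indeed the cause, one possible workaround (a sketch under that assumption, not a confirmed fix) is to override `io.compression.codec.lzo.buffersize` back to its earlier 64KB value in core-site.xml:

```xml
<!-- Assumption: restoring the pre-change 64KB (65536-byte) default avoids the
     decompression failure; verify against your hadoop-lzo build. -->
<property>
  <name>io.compression.codec.lzo.buffersize</name>
  <value>65536</value>
</property>
```

The same override can also be passed per command via the generic `-D` option, e.g. `hadoop fs -Dio.compression.codec.lzo.buffersize=65536 -text <file>`.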



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
