[ https://issues.apache.org/jira/browse/HDFS-16970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707728#comment-17707728 ]
ASF GitHub Bot commented on HDFS-16970:
---------------------------------------
hadoop-yetus commented on PR #5526:
URL: https://github.com/apache/hadoop/pull/5526#issuecomment-1493449678
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 39m 41s | | trunk passed |
| +1 :green_heart: | compile | 1m 0s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 56s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 2s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 2m 43s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 0s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 51s | | the patch passed |
| +1 :green_heart: | compile | 0m 51s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 51s | | the patch passed |
| +1 :green_heart: | compile | 0m 48s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 0m 48s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 50s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 2m 31s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 46s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 24s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | 101m 46s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5526/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5526 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 42b1c5ba41c4 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c12bb0d649e251bbf381fca170cf44a29f69c49a |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5526/1/testReport/ |
| Max. process+thread count | 560 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5526/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> EC: client copy wrong buffer from decode output during pread
> ------------------------------------------------------------
>
> Key: HDFS-16970
> URL: https://issues.apache.org/jira/browse/HDFS-16970
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: dfsclient, ec, erasure-coding
> Affects Versions: 3.3.4
> Reporter: MingHui Luo
> Priority: Critical
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5, 3.2.5
>
>
> When DFSStripedInputStream performs a pread from a striped block group and
> the read of an internal block times out, the client reads parity blocks,
> decodes the lost data, and fills the original chunk buffer with the decoded
> output. The chunk buffer, however, ends up holding wrong data. The reason is:
> 1. the original chunk buffer had already read some bytes from the
> blockReader before the timeout, and
> 2. the slice of the chunk ByteBuffer is always filled starting from
> position 0 of the decode ByteBuffer.
> The slice is therefore filled from the wrong position in the decode
> ByteBuffer, so the pread returns wrong data. A minimal sketch after the log
> below makes the offset mismatch concrete.
> {code:java}
> 23/03/21 06:31:11 WARN [StripedRead-24] DFSClient: Exception while reading from BP-xxx:blk_-9xxx_xxx of file_xxx from DatanodeInfoWithStorage[10.xxx.xx.xx:50010,DS-xxx,DISK]
> java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.xxx.xx.xx:51426 remote=/10.xxx.xx.xx:50010]
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:256)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:207)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:221)
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:201)
>     at org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:180)
>     at org.apache.hadoop.hdfs.ByteBufferStrategy.readFromBlock(ReaderStrategy.java:172)
>     at org.apache.hadoop.hdfs.StripeReader.readToBuffer(StripeReader.java:240)
>     at org.apache.hadoop.hdfs.StripeReader.lambda$readCells$0(StripeReader.java:286)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
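To make the offset mismatch concrete, here is a minimal, self-contained sketch of the buffer arithmetic. The class and variable names are hypothetical and this is not the actual DFSStripedInputStream code; it only illustrates how copying decoded bytes into a partially filled chunk buffer goes wrong when the copy starts at position 0 of the decode output instead of skipping the bytes already read.

{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;

/** Hypothetical sketch of the offset bug; not the actual HDFS client code. */
public class DecodeCopySketch {
  public static void main(String[] args) {
    // Full cell as reconstructed by the erasure decoder.
    byte[] decodedCell = {10, 11, 12, 13, 14, 15, 16, 17};

    // The chunk buffer had already received 3 bytes from the blockReader
    // before the SocketTimeoutException, so position() == 3.
    ByteBuffer chunkBuffer = ByteBuffer.allocate(decodedCell.length);
    chunkBuffer.put(new byte[] {10, 11, 12});

    // Buggy copy: fill the chunk's remaining slice from position 0 of the
    // decode output, so the tail of the chunk gets the head of the cell.
    ByteBuffer buggy = ByteBuffer.wrap(decodedCell);
    buggy.limit(chunkBuffer.remaining());   // takes bytes 10..14
    chunkBuffer.duplicate().put(buggy);     // writes them into positions 3..7
    System.out.println(Arrays.toString(chunkBuffer.array()));
    // prints [10, 11, 12, 10, 11, 12, 13, 14] -- wrong tail data

    // Correct copy: skip the bytes the chunk buffer already holds.
    ByteBuffer fixed = ByteBuffer.wrap(decodedCell);
    fixed.position(chunkBuffer.position()); // takes bytes 13..17
    chunkBuffer.put(fixed);
    System.out.println(Arrays.toString(chunkBuffer.array()));
    // prints [10, 11, 12, 13, 14, 15, 16, 17]
  }
}
{code}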
--
This message was sent by Atlassian Jira
(v8.20.10#820010)