[
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835752#comment-17835752
]
ASF GitHub Bot commented on HDFS-17455:
---------------------------------------
hadoop-yetus commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2047669969
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 34s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 17s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 32m 44s | | trunk passed |
| +1 :green_heart: | compile | 5m 31s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 5m 19s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 1m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 2m 24s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| -1 :x: | spotbugs | 2m 35s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 35m 44s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 1s | | the patch passed |
| +1 :green_heart: | compile | 5m 21s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 5m 21s | | the patch passed |
| +1 :green_heart: | compile | 5m 11s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 5m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 14s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 4s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 2m 12s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 5m 55s | | the patch passed |
| +1 :green_heart: | shadedclient | 35m 32s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 28s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 231m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 405m 41s | | |
| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6710 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux eae96800de5f 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / cc6f05f5edc79189eaa3c0bce002670044e55d4b |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/testReport/ |
| Max. process+thread count | 3594 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -------------------------------------------------------------------------
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Haiyang Hu
> Assignee: Haiyang Hu
> Priority: Major
> Labels: pull-request-available
>
> When the client reads data and connects to the datanode, an
> InvalidBlockTokenException is thrown because the datanode access token is
> invalid at that time. The subsequent call to the fetchBlockAt method then
> throws java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * The client calls DFSInputStream#getBlockReader to connect to the datanode;
> because the datanode access token is invalid at this time,
> InvalidBlockTokenException is thrown, and the subsequent call to
> DFSInputStream#fetchBlockAt throws java.lang.IndexOutOfBoundsException (the
> index arithmetic is reproduced in the standalone sketch after the code block
> below).
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>     throws IOException {
>   if (target >= getFileLength()) {
>     // the target size is smaller than fileLength (completeBlockSize +
>     // lastBlockBeingWrittenLength),
>     // here at this time target is 1024 and getFileLength is 2048
>     throw new IOException("Attempted to read past end of file");
>   }
>   ...
>   while (true) {
>     ...
>     try {
>       blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>           targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>           storageType, chosenNode);
>       if(connectFailedOnce) {
>         DFSClient.LOG.info("Successfully connected to " + targetAddr +
>             " for " + targetBlock.getBlock());
>       }
>       return chosenNode;
>     } catch (IOException ex) {
>       ...
>       } else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>         refetchToken--;
>         // Here will catch InvalidBlockTokenException.
>         fetchBlockAt(target);
>       } else {
>         ...
>       }
>     }
>   }
> }
>
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>     throws IOException {
>   maybeRegisterBlockRefresh();
>   synchronized(infoLock) {
>     // Here the locatedBlocks only contains one locatedBlock, at this time
>     // the offset is 1024 and fileLength is 0, so the targetBlockIdx is -2
>     int targetBlockIdx = locatedBlocks.findBlock(offset);
>     if (targetBlockIdx < 0) { // block is not cached
>       targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
>       // Here the targetBlockIdx is 1;
>       useCache = false;
>     }
>     if (!useCache) { // fetch blocks
>       final LocatedBlocks newBlocks = (length == 0)
>           ? dfsClient.getLocatedBlocks(src, offset)
>           : dfsClient.getLocatedBlocks(src, offset, length);
>       if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>         throw new EOFException("Could not find target position " + offset);
>       }
>       // Update the LastLocatedBlock, if offset is for last block.
>       if (offset >= locatedBlocks.getFileLength()) {
>         setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
>       } else {
>         locatedBlocks.insertRange(targetBlockIdx,
>             newBlocks.getLocatedBlocks());
>       }
>     }
>     // Here the locatedBlocks only contains one locatedBlock, so will throw
>     // java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>     return locatedBlocks.get(targetBlockIdx);
>   }
> }
> {code}
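> The index arithmetic described in the comments above can be reproduced outside
> of HDFS. Below is a minimal, self-contained sketch (not Hadoop code): it
> assumes a findBlock miss is encoded the usual Collections.binarySearch way,
> -(insertionPoint) - 1, that getInsertIndex simply decodes that value, and it
> stands in a plain list of block start offsets for the cached LocatedBlocks.
> {code:java}
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
>
> public class InsertIndexDemo {
>
>   // Hypothetical stand-in for LocatedBlocks.getInsertIndex: decode a
>   // binarySearch miss into the position where the block would be inserted.
>   static int getInsertIndex(int binSearchResult) {
>     return -(binSearchResult) - 1;
>   }
>
>   public static void main(String[] args) {
>     // The cached list holds a single located block, modelled here by its
>     // start offset only (the lone RBW block from the scenario above).
>     List<Long> blockStartOffsets = new ArrayList<>(List.of(0L));
>
>     // Searching for an offset past the cached range misses; insertion
>     // point 1 is encoded as -2, matching the "-2" in the comments above.
>     int targetBlockIdx = Collections.binarySearch(blockStartOffsets, 1024L);
>     System.out.println(targetBlockIdx);   // -2
>
>     int insertIdx = getInsertIndex(targetBlockIdx);
>     System.out.println(insertIdx);        // 1
>
>     // If the refreshed list still contains only one block, get(1) fails
>     // exactly as in the client stack trace below.
>     blockStartOffsets.get(insertIdx);
>     // -> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>   }
> }
> {code}
> Running this prints -2 and 1, then fails with the same "Index 1 out of bounds
> for length 1" message shown in the client stack trace below.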
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
>     at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
>     at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
>     at java.base/java.util.Objects.checkIndex(Objects.java:359)
>     at java.base/java.util.ArrayList.get(ArrayList.java:427)
>     at org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockAt(DFSInputStream.java:569)
>     at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockAt(DFSInputStream.java:540)
>     at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:704)
>     at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:884)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:957)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:804)
> {code}
> The datanode exception:
> {code:java}
> 2024-03-27 15:56:35,477 WARN datanode.DataNode (DataXceiver.java:checkAccess(1487)) [DataXceiver for client DFSClient_NONMAPREDUCE_475786505_1 at /xxx [Sending block BP-xxx:blk_1138933918_65194340]] - Block token verification failed: op=READ_BLOCK, remoteAddress=/XXX, message=Can't re-compute password for block_token_identifier (expiryDate=1711562193469, keyId=1775816931, userId=test, blockPoolId=BP-xxx-xxx-xxx, blockId=1138933918, access modes=[READ], storageTypes= [SSD, SSD, SSD], storageIds= [DS-xxx1, DS-xxx2,DS-xxx3]), since the required block key (keyID=1775816931) doesn't exist
> {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)