[ 
https://issues.apache.org/jira/browse/HDFS-16538?focusedWorklogId=757118&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-757118
 ]

ASF GitHub Bot logged work on HDFS-16538:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Apr/22 17:28
            Start Date: 14/Apr/22 17:28
    Worklog Time Spent: 10m 
      Work Description: hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1099446513

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 37s |  |  trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   6m 16s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  trunk passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 54s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 46s |  |  the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   6m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m  9s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   6m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  7s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 234m 43s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 389m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a7b6b3da85bb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 22359a90c8e8cd1dce2291ba8b69ca0a25161872 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/testReport/ |
   | Max. process+thread count | 3058 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
-------------------

    Worklog Id:     (was: 757118)
    Time Spent: 1h  (was: 50m)

>  EC decoding failed due to not enough valid inputs
> --------------------------------------------------
>
>                 Key: HDFS-16538
>                 URL: https://issues.apache.org/jira/browse/HDFS-16538
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: qinyuren
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> We found this error when #StripeReader.readStripe() has more than one failed block read.
> We use the ec(6+3) EC policy in our cluster.
> {code:java}
> Caused by: org.apache.hadoop.HadoopIllegalArgumentException: No enough valid inputs are provided, not recoverable
>         at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.checkInputBuffers(ByteBufferDecodingState.java:119)
>         at org.apache.hadoop.io.erasurecode.rawcoder.ByteBufferDecodingState.<init>(ByteBufferDecodingState.java:47)
>         at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:86)
>         at org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder.decode(RawErasureDecoder.java:170)
>         at org.apache.hadoop.hdfs.StripeReader.decodeAndFillBuffer(StripeReader.java:462)
>         at org.apache.hadoop.hdfs.StatefulStripeReader.decode(StatefulStripeReader.java:94)
>         at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:406)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:327)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:420)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:892)
>         at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
>         at java.base/java.io.DataInputStream.read(DataInputStream.java:149)
> {code}
>  
> {code:java}
> while (!futures.isEmpty()) {
>   try {
>     StripingChunkReadResult r = StripedBlockUtil
>         .getNextCompletedStripedRead(service, futures, 0);
>     dfsStripedInputStream.updateReadStats(r.getReadStats());
>     DFSClient.LOG.debug("Read task returned: {}, for stripe {}",
>         r, alignedStripe);
>     StripingChunk returnedChunk = alignedStripe.chunks[r.index];
>     Preconditions.checkNotNull(returnedChunk);
>     Preconditions.checkState(returnedChunk.state == StripingChunk.PENDING);
>     if (r.state == StripingChunkReadResult.SUCCESSFUL) {
>       returnedChunk.state = StripingChunk.FETCHED;
>       alignedStripe.fetchedChunksNum++;
>       updateState4SuccessRead(r);
>       if (alignedStripe.fetchedChunksNum == dataBlkNum) {
>         clearFutures();
>         break;
>       }
>     } else {
>       returnedChunk.state = StripingChunk.MISSING;
>       // close the corresponding reader
>       dfsStripedInputStream.closeReader(readerInfos[r.index]);
>       final int missing = alignedStripe.missingChunksNum;
>       alignedStripe.missingChunksNum++;
>       checkMissingBlocks();
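>       // per this report: each additional failed data block reaches this
>       // branch again, and readDataForDecoding() below re-initializes
>       // decodeInputs, discarding parity data read by readParityChunks()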
>       readDataForDecoding();
>       readParityChunks(alignedStripe.missingChunksNum - missing);
>     }
> {code}
> This error can be triggered by #StatefulStripeReader.decode.
> The reason is that:
>  # If more than one *data block* read fails, #readDataForDecoding is called multiple times (see the sketch below);
>  # the *decodeInputs array* is re-initialized on each of those calls;
>  # the *parity data* previously filled into the *decodeInputs array* by #readParityChunks is therefore reset to null.
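> 
> The self-contained sketch below (hypothetical class and method names, not Hadoop's actual API) reproduces the failure mode described in the list above: re-allocating the inputs array on the second failed data-block read discards the parity buffer fetched earlier, which is what later makes the decoder reject the inputs with "No enough valid inputs are provided, not recoverable".
> {code:java}
> // Hypothetical demo, not Hadoop code: shows why re-initializing the
> // decode inputs on every failed data-block read loses parity data.
> public class DecodeInputsDemo {
>   private static final int DATA_BLK_NUM = 6;   // ec(6+3): 6 data blocks
>   private static final int PARITY_BLK_NUM = 3; // and 3 parity blocks
> 
>   // Mirrors the role of StripeReader's decodeInputs buffers.
>   private byte[][] decodeInputs;
> 
>   // Unguarded (buggy) initialization: invoked once per failed data block,
>   // so a second failure re-allocates the array and clears every slot.
>   void readDataForDecodingUnguarded() {
>     decodeInputs = new byte[DATA_BLK_NUM + PARITY_BLK_NUM][];
>   }
> 
>   // Guarded initialization: allocate only once per stripe, so parity
>   // inputs filled earlier survive a second data-block failure.
>   void readDataForDecodingGuarded() {
>     if (decodeInputs == null) {
>       decodeInputs = new byte[DATA_BLK_NUM + PARITY_BLK_NUM][];
>     }
>   }
> 
>   // Simulates readParityChunks filling `num` parity input slots.
>   void readParityChunks(int num) {
>     for (int i = 0; i < num; i++) {
>       decodeInputs[DATA_BLK_NUM + i] = new byte[]{1};
>     }
>   }
> 
>   public static void main(String[] args) {
>     DecodeInputsDemo demo = new DecodeInputsDemo();
>     demo.readDataForDecodingUnguarded(); // first data-block failure
>     demo.readParityChunks(1);            // one parity chunk fetched
>     demo.readDataForDecodingUnguarded(); // second failure re-allocates
>     // The parity slot is null again, so decoding would fail as above.
>     System.out.println(demo.decodeInputs[DATA_BLK_NUM] != null); // false
>   }
> }
> {code}
> Guarding the allocation, as in readDataForDecodingGuarded above, is one possible shape of a fix; whether the patch in PR #4167 takes exactly this form is not shown here.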
> 


