[jira] [Comment Edited] (HDFS-15240) Erasure Coding: dirty buffer causes reconstruction block error
[ https://issues.apache.org/jira/browse/HDFS-15240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239878#comment-17239878 ] Hui Fei edited comment on HDFS-15240 at 11/28/20, 4:29 AM: --- [~marvelrock] Thanks. It's great, [^HDFS-15240.010.patch] looks better and clearer. {code} + } catch (InterruptedException e) { + } {code} Maybe we should log the exception message; after this, +1. was (Author: ferhui): [~marvelrock] Thanks. It's great, [^HDFS-15240.010.patch] looks better and clearer. {quote} + } catch (InterruptedException e) { + } {quote} Maybe we should log the exception message; after this, +1. > Erasure Coding: dirty buffer causes reconstruction block error > -- > > Key: HDFS-15240 > URL: https://issues.apache.org/jira/browse/HDFS-15240 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, erasure-coding >Reporter: HuangTao >Assignee: HuangTao >Priority: Major > Attachments: HDFS-15240.001.patch, HDFS-15240.002.patch, > HDFS-15240.003.patch, HDFS-15240.004.patch, HDFS-15240.005.patch, > HDFS-15240.006.patch, HDFS-15240.007.patch, HDFS-15240.008.patch, > HDFS-15240.009.patch, HDFS-15240.010.patch, > image-2020-07-16-15-56-38-608.png, > org.apache.hadoop.hdfs.TestReconstructStripedFile-output.txt, > org.apache.hadoop.hdfs.TestReconstructStripedFile.txt, > test-HDFS-15240.006.patch > > > When reading some lzo files, we found some blocks were broken. > I read back all internal blocks (b0-b8) of the block group (RS-6-3-1024k) from > the DN directly, and chose 6 blocks (b0-b5) to decode the other 3 blocks (b6', b7', b8'). > Then I found the longest common substring (LCS) between b6' (decoded) and > b6 (read from the DN), and likewise for b7'/b7 and b8'/b8. > After iterating through all combinations of 6 blocks from the block group, > I found one case where the LCS length is the block length - 64KB; 64KB is exactly the length of the ByteBuffer used by > StripedBlockReader. So the corrupt reconstructed block was produced by a dirty > buffer. 
> The following log snippet (only 2 of 28 cases shown) is my check program > output. In my case, I knew the 3rd block was corrupt, so I needed the other 5 blocks > to decode another 3 blocks, and then found that the 1st block's LCS length is the block > length - 64KB. > It means blocks (0,1,2,4,5,6) were used to reconstruct the 3rd block, and the > dirty buffer was used before reading the 1st block. > It must be noted that StripedBlockReader read from offset 0 of the 1st block > after the dirty buffer was used. > EDITED for readability. > {code:java} > decode from block[0, 2, 3, 4, 5, 7] to generate block[1', 6', 8'] > Check the first 131072 bytes between block[1] and block[1'], the longest > common substring length is 4 > Check the first 131072 bytes between block[6] and block[6'], the longest > common substring length is 4 > Check the first 131072 bytes between block[8] and block[8'], the longest > common substring length is 4 > decode from block[0, 2, 3, 4, 5, 6] to generate block[1', 7', 8'] > Check the first 131072 bytes between block[1] and block[1'], the longest > common substring length is 65536 > CHECK AGAIN: all 27262976 bytes between block[1] and block[1'], the longest > common substring length is 27197440 # this one > Check the first 131072 bytes between block[7] and block[7'], the longest > common substring length is 4 > Check the first 131072 bytes between block[8] and block[8'], the longest > common substring length is 4{code} > Now I know the dirty buffer causes the reconstruction block error, but how does > the dirty buffer come about? > After digging into the code and the DN log, I found that the following DN log shows the > root cause. > {code:java} > [INFO] [stripedRead-1017] : Interrupted while waiting for IO on channel > java.nio.channels.SocketChannel[connected local=/:52586 > remote=/:50010]. 18 millis timeout left. 
> [WARN] [StripedBlockReconstruction-199] : Failed to reconstruct striped > block: BP-714356632--1519726836856:blk_-YY_3472979393 > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.util.StripedBlockUtil.getNextCompletedStripedRead(StripedBlockUtil.java:314) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.doReadMinimumSources(StripedReader.java:308) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.readMinimumSources(StripedReader.java:269) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.reconstruct(StripedBlockReconstructor.java:94) > at > org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.run(StripedBlockReconstructor.java:60) > at >
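Hui Fei's review point about the empty catch block can be sketched as follows. This is a minimal illustration of the suggestion (log the interrupt and restore the thread's interrupt status), not code from HDFS-15240.010.patch; the class and method names are invented for the example.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch of the review suggestion: instead of an empty
 * catch (InterruptedException e) {} block, log the interrupt and restore
 * the thread's interrupt status so callers can still observe it.
 */
public class InterruptLoggingSketch {
    static String waitForResult(Future<String> task) {
        try {
            return task.get();
        } catch (InterruptedException e) {
            // Suggested fix: do not swallow the interrupt silently.
            System.err.println("Interrupted while waiting for striped read: " + e);
            Thread.currentThread().interrupt(); // preserve interrupt status
            return null;
        } catch (ExecutionException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> task = pool.submit(() -> "done");
        System.out.println(waitForResult(task)); // prints "done"
        pool.shutdown();
    }
}
```

Restoring the interrupt flag matters in thread pools such as stripedRead-*: swallowing the interrupt hides the cancellation from the pool and from any outer loop checking Thread.interrupted().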
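The per-combination check described in the HDFS-15240 report (compare each decoded block b' against the block b read from the DN via their longest common substring) can be illustrated with a small routine. This is a plain dynamic-programming sketch, not HuangTao's actual check program, which compares multi-megabyte blocks and would need a more memory-conscious implementation.

```java
/**
 * Minimal sketch of the LCS comparison used to localize the corruption:
 * a short longest-common-substring relative to the block length signals
 * a mismatched (dirty) region between the decoded and on-disk blocks.
 */
public class LcsCheckSketch {
    /** Length of the longest common substring of two byte arrays. */
    static int longestCommonSubstring(byte[] a, byte[] b) {
        int best = 0;
        int[] prev = new int[b.length + 1];
        for (int i = 1; i <= a.length; i++) {
            int[] cur = new int[b.length + 1];
            for (int j = 1; j <= b.length; j++) {
                if (a[i - 1] == b[j - 1]) {
                    cur[j] = prev[j - 1] + 1; // extend the common run
                    best = Math.max(best, cur[j]);
                }
            }
            prev = cur;
        }
        return best;
    }

    public static void main(String[] args) {
        byte[] onDisk  = {1, 2, 3, 4, 5, 6, 7, 8};
        byte[] decoded = {9, 9, 3, 4, 5, 6, 9, 9};
        // Toy data: only bytes 3..6 line up, so the LCS length is 4.
        System.out.println(longestCommonSubstring(onDisk, decoded)); // prints 4
    }
}
```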
[jira] [Commented] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239876#comment-17239876 ] Hui Fei commented on HDFS-15698: merged into trunk > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei resolved HDFS-15698. Fix Version/s: 3.4.0 Resolution: Fixed > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?focusedWorklogId=517491=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517491 ] ASF GitHub Bot logged work on HDFS-15698: - Author: ASF GitHub Bot Created on: 28/Nov/20 04:20 Start Date: 28/Nov/20 04:20 Worklog Time Spent: 10m Work Description: ferhui merged pull request #2495: URL: https://github.com/apache/hadoop/pull/2495 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 517491) Time Spent: 1h (was: 50m) > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?focusedWorklogId=517490=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517490 ] ASF GitHub Bot logged work on HDFS-15698: - Author: ASF GitHub Bot Created on: 28/Nov/20 04:19 Start Date: 28/Nov/20 04:19 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #2495: URL: https://github.com/apache/hadoop/pull/2495#issuecomment-735037792 @ayushtkn @goiri Thanks for review. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 517490) Time Spent: 50m (was: 40m) > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15660) StorageTypeProto is not compatible between 3.x and 2.6
[ https://issues.apache.org/jira/browse/HDFS-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239856#comment-17239856 ] huangtianhua commented on HDFS-15660: - [~vinayakumarb] [~brahma] could you have a look at this? Thanks. > StorageTypeProto is not compatible between 3.x and 2.6 > --- > > Key: HDFS-15660 > URL: https://issues.apache.org/jira/browse/HDFS-15660 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.2.0, 3.1.3 >Reporter: Ryan Wu >Assignee: Ryan Wu >Priority: Major > Attachments: HDFS-15660.001.patch, HDFS-15660.002.patch > > > In our case, when the NN had been upgraded to 3.1.3 and the DN's version was still 2.6, > we found that when Hive called the getContentSummary method, the client and server were > not compatible because Hadoop 3 added the new PROVIDED storage type. > {code:java} > // code placeholder > 20/04/15 14:28:35 INFO retry.RetryInvocationHandler---main: Exception while > invoking getContentSummary of class ClientNamenodeProtocolTranslatorPB over > x/x:8020. Trying to fail over immediately. 
> java.io.IOException: com.google.protobuf.ServiceException: > com.google.protobuf.UninitializedMessageException: Message missing required > fields: summary.typeQuotaInfos.typeQuotaInfo[3].type > at > org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:819) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) > at com.sun.proxy.$Proxy11.getContentSummary(Unknown Source) > at > org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:3144) > at > org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:706) > at > org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:713) > at org.apache.hadoop.fs.shell.Count.processPath(Count.java:109) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317) > at > org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289) > at > org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271) > at > org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255) > at > org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118) > at org.apache.hadoop.fs.shell.Command.run(Command.java:165) > at 
org.apache.hadoop.fs.FsShell.run(FsShell.java:315) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.fs.FsShell.main(FsShell.java:372) > Caused by: com.google.protobuf.ServiceException: > com.google.protobuf.UninitializedMessageException: Message missing required > fields: summary.typeQuotaInfos.typeQuotaInfo[3].type > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:272) > at com.sun.proxy.$Proxy10.getContentSummary(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:816) > ... 23 more > Caused by: com.google.protobuf.UninitializedMessageException: Message missing > required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type > at > com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65392) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65331) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:263) > ... 25 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
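The UninitializedMessageException above follows from proto2 wire semantics: when a proto2 parser receives an enum value it does not recognize, it moves that field into the unknown-field set. For a 2.6 client, the 3.x-only PROVIDED value is unrecognized, so the required `type` field of the quota-info message is effectively unset and `build()` fails. The sketch below approximates the shape of the schema; names and field numbers are illustrative rather than quoted from Hadoop's hdfs.proto.

```proto
// Illustrative proto2 sketch of the HDFS-15660 incompatibility.
syntax = "proto2";

enum StorageTypeProto {
  DISK = 1;
  SSD = 2;
  ARCHIVE = 3;
  RAM_DISK = 4;
  PROVIDED = 5;  // added in 3.x; unknown to a 2.6 client's schema
}

message StorageTypeQuotaInfoProto {
  // proto2: an unrecognized enum value on the wire is moved to the
  // unknown-field set, leaving this *required* field unset, so the
  // receiver's build() throws UninitializedMessageException.
  required StorageTypeProto type = 1;
  optional uint64 quota = 2;     // illustrative field
  optional uint64 consumed = 3;  // illustrative field
}
```

This is also why proto3 dropped required fields and keeps unrecognized enum values as raw integers: either change alone would have avoided the hard failure here.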
[jira] [Work logged] (HDFS-15042) Add more tests for ByteBufferPositionedReadable
[ https://issues.apache.org/jira/browse/HDFS-15042?focusedWorklogId=517447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517447 ] ASF GitHub Bot logged work on HDFS-15042: - Author: ASF GitHub Bot Created on: 27/Nov/20 20:05 Start Date: 27/Nov/20 20:05 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #1747: URL: https://github.com/apache/hadoop/pull/1747#issuecomment-734970323 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 30m 29s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 32s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 38s | | trunk passed | | +1 :green_heart: | compile | 21m 16s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 18m 0s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 50s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 52s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 18s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 41s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 3m 44s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 3m 17s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 8m 8s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 53s | | the patch passed | | +1 :green_heart: | compile | 20m 39s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 20m 39s | | the patch passed | | +1 :green_heart: | compile | 18m 2s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 18m 2s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 50s | | root: The patch generated 0 new + 50 unchanged - 5 fixed = 50 total (was 55) | | +1 :green_heart: | mvnsite | 3m 51s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 17m 17s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 41s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 3m 35s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 8m 36s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 9m 45s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 25s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 117m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 361m 48s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDeadDatanode | | | hadoop.hdfs.TestMultipleNNPortQOP | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1747/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1747 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8e90e0b3f470 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Work logged] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?focusedWorklogId=517408=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517408 ] ASF GitHub Bot logged work on HDFS-15698: - Author: ASF GitHub Bot Created on: 27/Nov/20 16:49 Start Date: 27/Nov/20 16:49 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2495: URL: https://github.com/apache/hadoop/pull/2495#issuecomment-734918223 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 24s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 60m 19s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 33s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 19m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. | | | | 84m 49s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2495/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2495 | | Optional Tests | dupname asflicense shadedclient | | uname | Linux 5793ee2769ff 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 142941b96e2 | | Max. process+thread count | 617 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2495/2/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 517408) Time Spent: 40m (was: 0.5h) > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?focusedWorklogId=517361=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517361 ] ASF GitHub Bot logged work on HDFS-15698: - Author: ASF GitHub Bot Created on: 27/Nov/20 13:20 Start Date: 27/Nov/20 13:20 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2495: URL: https://github.com/apache/hadoop/pull/2495#issuecomment-734832003 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 44m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | -1 :x: | mvninstall | 8m 36s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2495/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | -1 :x: | shadedclient | 9m 20s | | branch has errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 9s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | -1 :x: | shadedclient | 22m 26s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | +0 :ok: | asflicense | 0m 42s | | ASF License check generated no output? 
| | | | 80m 48s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2495/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2495 | | Optional Tests | dupname asflicense shadedclient | | uname | Linux 5899c7f2479d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 03b4e989712 | | Max. process+thread count | 84 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2495/1/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 517361) Time Spent: 0.5h (was: 20m) > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15240) Erasure Coding: dirty buffer causes reconstruction block error
[ https://issues.apache.org/jira/browse/HDFS-15240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239657#comment-17239657 ] Hadoop QA commented on HDFS-15240:
--
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 23s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 50s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 53s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 45s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 10s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 48s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 36s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 24m 17s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 8s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 51s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 20s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 25s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 40s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 40s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 33s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 33s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 10s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 43s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 10s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
[jira] [Commented] (HDFS-15680) Disable Broken Azure Junits
[ https://issues.apache.org/jira/browse/HDFS-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239650#comment-17239650 ] Ayush Saxena commented on HDFS-15680: - Thanx [~hexiaoqiao], I am also in the same boat with no idea about Azure, but yes, cherry-picking HADOOP-17325 to branch-3.2 should fix it there as well. We can try running any one of them locally before pushing. > Disable Broken Azure Junits > --- > > Key: HDFS-15680 > URL: https://issues.apache.org/jira/browse/HDFS-15680 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs/azure >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 20m > Remaining Estimate: 0h > > There are 6 test classes that have been failing on Yetus for several months. > They contributed to more than 41 failing tests, which makes reviewing Yetus > reports a pain in the neck. Another point is to save resources and > avoid tying up ports, memory, and CPU. > Over the last month, there was some effort to bring Yetus back to a > stable state. However, there has been no progress in addressing the Azure failures. > Generally, I do not like to disable failing tests, but for this specific > case, I do not think it makes any sense to have 41 failing tests from > one module for several months. Whenever someone finds that those tests are > useful, they can re-enable the tests on Yetus *_After_* the tests are > fixed. > Following a PR, I have to verify that my patch does not cause any failures > (including changes to error messages in existing tests). A thorough review takes > a considerable amount of time browsing the nightly builds and GitHub reports. > So, please consider how much time has been spent reviewing those stack traces > over the last months. 
> Finally, this is one of the reasons developers tend to ignore the reports, > because it would take too much time to review; and by default, the errors are > considered irrelevant. > CC: [~aajisaka], [~elgoiri], [~weichiu], [~ayushtkn] > {code:bash} > hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked >hadoop.fs.azure.TestNativeAzureFileSystemMocked >hadoop.fs.azure.TestBlobMetadata >hadoop.fs.azure.TestNativeAzureFileSystemConcurrency >hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck >hadoop.fs.azure.TestNativeAzureFileSystemContractMocked >hadoop.fs.azure.TestWasbFsck >hadoop.fs.azure.TestOutOfBandAzureBlobOperations > {code} > {code:bash} > org.apache.hadoop.fs.azure.TestBlobMetadata.testFolderMetadata > org.apache.hadoop.fs.azure.TestBlobMetadata.testFirstContainerVersionMetadata > org.apache.hadoop.fs.azure.TestBlobMetadata.testPermissionMetadata > org.apache.hadoop.fs.azure.TestBlobMetadata.testOldPermissionMetadata > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testLinkBlobs > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatusRootDir > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryMoveToExistingDirectory > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatus > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryAsExistingDirectory > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameToDirWithSamePrefixAllowed > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testLSRootDir > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testDeleteRecursively > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck.testWasbFsck > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testChineseCharactersFolderRename > 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListingWithZeroByteRenameMetadata > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListing > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testUriEncoding > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testDeepFileCreation > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testListDirectory > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderRenameInProgress > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameFolder > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameImplicitFolder > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolder > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testStoreDeleteFolder >
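The thread above proposes disabling whole failing test classes. The actual mechanism used by the patch is not shown in these emails; one common Maven way to skip test classes is a Surefire exclusion in the module's pom.xml. A hypothetical sketch (the plugin configuration below is illustrative, not the HDFS-15680 change itself; the class names are taken from the failing list above):

```xml
<!-- Hypothetical sketch: skip known-broken test classes via a Surefire
     exclusion in the module's pom.xml (not necessarily what the patch does). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/TestBlobMetadata.java</exclude>
      <exclude>**/TestNativeAzureFileSystemMocked.java</exclude>
      <exclude>**/TestWasbFsck.java</exclude>
    </excludes>
  </configuration>
</plugin>
```

An alternative with the same effect is annotating each class with JUnit's @Ignore, which keeps the exclusion visible next to the test code and makes it easy to re-enable once the tests are fixed.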
[jira] [Work logged] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?focusedWorklogId=517344=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517344 ] ASF GitHub Bot logged work on HDFS-15698: - Author: ASF GitHub Bot Created on: 27/Nov/20 12:00 Start Date: 27/Nov/20 12:00 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #2495: URL: https://github.com/apache/hadoop/pull/2495#issuecomment-734801161 @ayushtkn Could you please take a look? Thanks This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 517344) Time Spent: 20m (was: 10m) > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > ">" should be "" -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-15698: -- Labels: pull-request-available (was: ) > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > ">" should be ""
[jira] [Work logged] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
[ https://issues.apache.org/jira/browse/HDFS-15698?focusedWorklogId=517342=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517342 ] ASF GitHub Bot logged work on HDFS-15698: - Author: ASF GitHub Bot Created on: 27/Nov/20 11:58 Start Date: 27/Nov/20 11:58 Worklog Time Spent: 10m Work Description: ferhui opened a new pull request #2495: URL: https://github.com/apache/hadoop/pull/2495 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute Issue Time Tracking --- Worklog Id: (was: 517342) Remaining Estimate: 0h Time Spent: 10m > Fix the typo of dfshealth.html after HDFS-15358 > --- > > Key: HDFS-15698 > URL: https://issues.apache.org/jira/browse/HDFS-15698 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Trivial > Time Spent: 10m > Remaining Estimate: 0h > > ">" should be ""
[jira] [Created] (HDFS-15698) Fix the typo of dfshealth.html after HDFS-15358
Hui Fei created HDFS-15698: -- Summary: Fix the typo of dfshealth.html after HDFS-15358 Key: HDFS-15698 URL: https://issues.apache.org/jira/browse/HDFS-15698 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.4.0 Reporter: Hui Fei Assignee: Hui Fei ">" should be ""
[jira] [Commented] (HDFS-15240) Erasure Coding: dirty buffer causes reconstruction block error
[ https://issues.apache.org/jira/browse/HDFS-15240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239639#comment-17239639 ] Hadoop QA commented on HDFS-15240:
--
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 11s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 25s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 32s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 0s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 44s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 23m 14s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 11s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 24s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 0s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 11s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 16s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 6s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 6s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 50s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/318/artifact/out/diff-checkstyle-root.txt{color} | {color:orange} root: The patch generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 3s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient
[jira] [Commented] (HDFS-15680) Disable Broken Azure Junits
[ https://issues.apache.org/jira/browse/HDFS-15680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17239599#comment-17239599 ] Xiaoqiao He commented on HDFS-15680: Thanks [~ste...@apache.org]. IIUC, if we backport HADOOP-17325 to branch-3.2, it should fix the failing Azure unit tests there, right? Sorry, I am not familiar with Azure. > Disable Broken Azure Junits > --- > > Key: HDFS-15680 > URL: https://issues.apache.org/jira/browse/HDFS-15680 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs/azure >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 20m > Remaining Estimate: 0h > > There are 6 test classes that have been failing on Yetus for several months. > They contributed to more than 41 failing tests, which makes reviewing Yetus > reports a pain in the neck. Another point is to save resources and > avoid tying up ports, memory, and CPU. > Over the last month, there was some effort to bring Yetus back to a > stable state. However, there has been no progress in addressing the Azure failures. > Generally, I do not like to disable failing tests, but for this specific > case, I do not think it makes any sense to have 41 failing tests from > one module for several months. Whenever someone finds that those tests are > useful, they can re-enable the tests on Yetus *_After_* the tests are > fixed. > Following a PR, I have to verify that my patch does not cause any failures > (including changes to error messages in existing tests). A thorough review takes > a considerable amount of time browsing the nightly builds and GitHub reports. > So, please consider how much time has been spent reviewing those stack traces > over the last months. > Finally, this is one of the reasons developers tend to ignore the reports, > because it would take too much time to review; and by default, the errors are > considered irrelevant. 
> CC: [~aajisaka], [~elgoiri], [~weichiu], [~ayushtkn] > {code:bash} > hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked >hadoop.fs.azure.TestNativeAzureFileSystemMocked >hadoop.fs.azure.TestBlobMetadata >hadoop.fs.azure.TestNativeAzureFileSystemConcurrency >hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck >hadoop.fs.azure.TestNativeAzureFileSystemContractMocked >hadoop.fs.azure.TestWasbFsck >hadoop.fs.azure.TestOutOfBandAzureBlobOperations > {code} > {code:bash} > org.apache.hadoop.fs.azure.TestBlobMetadata.testFolderMetadata > org.apache.hadoop.fs.azure.TestBlobMetadata.testFirstContainerVersionMetadata > org.apache.hadoop.fs.azure.TestBlobMetadata.testPermissionMetadata > org.apache.hadoop.fs.azure.TestBlobMetadata.testOldPermissionMetadata > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency.testLinkBlobs > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatusRootDir > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryMoveToExistingDirectory > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testListStatus > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameDirectoryAsExistingDirectory > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testRenameToDirWithSamePrefixAllowed > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testLSRootDir > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked.testDeleteRecursively > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck.testWasbFsck > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testChineseCharactersFolderRename > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListingWithZeroByteRenameMetadata > 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderInFolderListing > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testUriEncoding > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testDeepFileCreation > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testListDirectory > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolderRenameInProgress > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameFolder > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRenameImplicitFolder > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testRedoRenameFolder > org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked.testStoreDeleteFolder >
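The backport being discussed above is an ordinary git cherry-pick of the HADOOP-17325 commit onto branch-3.2. As a hedged sketch, the flow can be demonstrated end to end in a throwaway repository (the file names, commit messages, and branch contents below are placeholders, not the real Hadoop tree or the real HADOOP-17325 commit):

```shell
# Toy demonstration of a backport: land a fix on the main line, then
# cherry-pick that commit onto a maintenance branch. All names are
# placeholders for illustration only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > README && git add . && git commit -q -m "initial"
git branch branch-3.2                  # maintenance branch forks here
echo fix > azure-fix.txt && git add . && git commit -q -m "HADOOP-17325. Fix Azure tests (placeholder)"
fix=$(git rev-parse HEAD)              # the commit we want to backport
git checkout -q branch-3.2             # azure-fix.txt does not exist here yet
git cherry-pick "$fix"                 # apply the same change onto branch-3.2
test -f azure-fix.txt && echo "backport applied to branch-3.2"
```

As the comments in the thread suggest, running at least one of the affected tests locally on the maintenance branch before pushing is the cheap way to confirm the cherry-pick actually fixes the failures there.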
[jira] [Updated] (HDFS-15697) Fast copy support EC for HDFS.
[ https://issues.apache.org/jira/browse/HDFS-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huhaiyang updated HDFS-15697: - Description: Enhance FastCopy to support EC file . (was: Enhance FastCopy to support EC file ) > Fast copy support EC for HDFS. > -- > > Key: HDFS-15697 > URL: https://issues.apache.org/jira/browse/HDFS-15697 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: huhaiyang >Assignee: huhaiyang >Priority: Major > > Enhance FastCopy to support EC file .
[jira] [Updated] (HDFS-15697) Fast copy support EC for HDFS.
[ https://issues.apache.org/jira/browse/HDFS-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huhaiyang updated HDFS-15697: - External issue ID: (was: https://issues.apache.org/jira/browse/HDFS-2139) > Fast copy support EC for HDFS. > -- > > Key: HDFS-15697 > URL: https://issues.apache.org/jira/browse/HDFS-15697 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: huhaiyang >Assignee: huhaiyang >Priority: Major > > Enhance FastCopy to support EC file
[jira] [Updated] (HDFS-15697) Fast copy support EC for HDFS.
[ https://issues.apache.org/jira/browse/HDFS-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huhaiyang updated HDFS-15697: - External issue ID: https://issues.apache.org/jira/browse/HDFS-2139 > Fast copy support EC for HDFS. > -- > > Key: HDFS-15697 > URL: https://issues.apache.org/jira/browse/HDFS-15697 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: huhaiyang >Assignee: huhaiyang >Priority: Major > > Enhance FastCopy to support EC file
[jira] [Updated] (HDFS-15697) Fast copy support EC for HDFS.
[ https://issues.apache.org/jira/browse/HDFS-15697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huhaiyang updated HDFS-15697: - Description: Enhance FastCopy to support EC file > Fast copy support EC for HDFS. > -- > > Key: HDFS-15697 > URL: https://issues.apache.org/jira/browse/HDFS-15697 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: huhaiyang >Assignee: huhaiyang >Priority: Major > > Enhance FastCopy to support EC file
[jira] [Created] (HDFS-15697) Fast copy support EC for HDFS.
huhaiyang created HDFS-15697: Summary: Fast copy support EC for HDFS. Key: HDFS-15697 URL: https://issues.apache.org/jira/browse/HDFS-15697 Project: Hadoop HDFS Issue Type: New Feature Reporter: huhaiyang Assignee: huhaiyang