LiJie20190102 opened a new issue, #13480:
URL: https://github.com/apache/dolphinscheduler/issues/13480

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### What happened
   
   14:05:41.342 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 
datanode 192.xx.xx.115:9866
   14:05:41.343 [main] DEBUG 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient - SASL 
encryption trust check: localHostTrusted = false, remoteHostTrusted = false
   14:05:41.343 [main] DEBUG 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient - SASL 
client skipping handshake in unsecured configuration for addr = /192.xx.xx.115, 
datanodeId = 
DatanodeInfoWithStorage[192.xx.xx.115:9866,DS-8252f2c6-c7ad-4c02-a8c7-896c0367a7b0,DISK]
   14:05:41.346 [main] WARN 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory - I/O error constructing 
remote block reader.
   java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-2061658534-192.xx.xx.110-1667283130593:blk_1078480978_4740524 received 
exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 
Replica not found for 
BP-2061658534-192.xx.xx.110-1667283130593:blk_1078480978_4740524. The block may 
have been removed recently by the balancer or by intentionally reducing the 
replication factor. This condition is usually harmless. To be certain, please 
check the preceding datanode log messages for signs of a more serious issue., 
for OP_READ_BLOCK, self=/192.168.20.184:62328, remote=/192.xx.xx.115:9866, for 
file /tmp/test/apache-dolphinscheduler-1.3.9_pro-1.noarch.rpm, for pool 
BP-2061658534-192.xx.xx.110-1667283130593 block 1078480978_4740524
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:128)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:104)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:451)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:419)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:858)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:754)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:381)
        at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:663)
        at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:594)
        at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:845)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:114)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:506)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:486)
        at 
org.apache.dolphinscheduler.common.utils.HadoopUtils.copyHdfsToLocal(HadoopUtils.java:405)
        at 
org.apache.dolphinscheduler.common.utils.HadoopUtilsTest.testCopyHdfsToLocal(HadoopUtilsTest.java:163)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
        at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
        at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
        at 
org.mockito.internal.runners.DefaultInternalRunner$1.run(DefaultInternalRunner.java:79)
        at 
org.mockito.internal.runners.DefaultInternalRunner.run(DefaultInternalRunner.java:85)
        at org.mockito.internal.runners.StrictRunner.run(StrictRunner.java:39)
        at org.mockito.junit.MockitoJUnitRunner.run(MockitoJUnitRunner.java:163)
        at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
        at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
        at 
com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
        at 
com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
        at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
        at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
        at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
   14:05:41.346 [main] WARN org.apache.hadoop.hdfs.DFSClient - Failed to 
connect to /192.xx.xx.115:9866 for block 
BP-2061658534-192.xx.xx.110-1667283130593:blk_1078480978_4740524, add to 
deadNodes and continue. 
   java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-2061658534-192.xx.xx.110-1667283130593:blk_1078480978_4740524 received 
exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: 
Replica not found for 
BP-2061658534-192.xx.xx.110-1667283130593:blk_1078480978_4740524. The block may 
have been removed recently by the balancer or by intentionally reducing the 
replication factor. This condition is usually harmless. To be certain, please 
check the preceding datanode log messages for signs of a more serious issue., 
for OP_READ_BLOCK, self=/192.168.20.184:62328, remote=/192.xx.xx.115:9866, for 
file /tmp/test/apache-dolphinscheduler-1.3.9_pro-1.noarch.rpm, for pool 
BP-2061658534-192.xx.xx.110-1667283130593 block 1078480978_4740524
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:128)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:104)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:451)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:419)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:858)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:754)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:381)
        at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:663)
        at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:594)
        at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:845)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:114)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:506)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:486)
        at 
org.apache.dolphinscheduler.common.utils.HadoopUtils.copyHdfsToLocal(HadoopUtils.java:405)
        at 
org.apache.dolphinscheduler.common.utils.HadoopUtilsTest.testCopyHdfsToLocal(HadoopUtilsTest.java:163)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
        at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
        at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
        at 
org.mockito.internal.runners.DefaultInternalRunner$1.run(DefaultInternalRunner.java:79)
        at 
org.mockito.internal.runners.DefaultInternalRunner.run(DefaultInternalRunner.java:85)
        at org.mockito.internal.runners.StrictRunner.run(StrictRunner.java:39)
        at org.mockito.junit.MockitoJUnitRunner.run(MockitoJUnitRunner.java:163)
        at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
        at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
        at 
com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
        at 
com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
        at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
        at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
        at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
   14:05:41.346 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to 
datanode 192.xx.xx.113:9866
   14:05:41.347 [main] DEBUG 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient - SASL 
encryption trust check: localHostTrusted = false, remoteHostTrusted = false
   14:05:41.347 [main] DEBUG 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient - SASL 
client skipping handshake in unsecured configuration for addr = /192.xx.xx.113, 
datanodeId = 
DatanodeInfoWithStorage[192.xx.xx.113:9866,DS-a3041412-4f9f-43e4-9dad-20b524306e7c,DISK]
   14:05:41.348 [main] INFO org.apache.hadoop.hdfs.DFSClient - Successfully 
connected to /192.xx.xx.113:9866 for 
BP-2061658534-192.xx.xx.110-1667283130593:blk_1078480978_4740524
   14:05:41.826 [IPC Parameter Sending Thread #0] DEBUG 
org.apache.hadoop.ipc.Client - IPC Client (2114289475) connection to 
hadoop01/192.xx.xx.110:8020 from hdfs sending #11 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
   14:05:41.830 [IPC Client (2114289475) connection to 
hadoop01/192.xx.xx.110:8020 from hdfs] DEBUG org.apache.hadoop.ipc.Client - IPC 
Client (2114289475) connection to hadoop01/192.xx.xx.110:8020 from hdfs got 
value #11
   
   
   Explanation:
   When the program is reading or downloading a resource (e.g. via `org.apache.dolphinscheduler.common.utils.HadoopUtils#copyHdfsToLocal` or `HadoopUtils#catFile(java.lang.String)`) and that resource is replaced at the same time (e.g. via `HadoopUtils#copyLocalToHdfs`), the read fails with the error above.
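   One way to narrow the race on the writer side is to upload to a temporary path and then rename it over the old file in a single step, so a reader never resolves a path whose blocks are about to be deleted mid-copy. Below is a minimal local sketch of that write-then-rename pattern using `java.nio.file`; in HDFS the analogous step would be `FileSystem.rename`. All class, method, and path names here are illustrative, not DolphinScheduler API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicReplace {

    // Write the new content to a temp file next to the target, then rename
    // it over the target in one step, so a concurrent reader never observes
    // a partially written or deleted file.
    static void replaceAtomically(Path target, byte[] newContent) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, newContent);
        Files.move(tmp, target,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    // Small demo: replace a 1-byte file with a 2-byte file and return the
    // resulting length.
    static int demo() throws IOException {
        Path dir = Files.createTempDirectory("replace-demo");
        Path target = dir.resolve("resource.bin");
        Files.write(target, new byte[]{1});
        replaceAtomically(target, new byte[]{2, 3});
        return Files.readAllBytes(target).length;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints 2
    }
}
```

   Note this does not fully protect a reader that already has a stream open on the old file; it only prevents new reads from landing on a half-replaced one.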
   
   ### What you expected to happen
   
   Reading or downloading a resource (e.g. via `HadoopUtils#copyHdfsToLocal`) should complete successfully, without `ReplicaNotFoundException`, even if the resource is replaced on HDFS (e.g. via `HadoopUtils#copyLocalToHdfs`) while it is being read.
   
   ### How to reproduce
   
   Add the following two test methods to `org.apache.dolphinscheduler.common.utils.HadoopUtilsTest` and run them at the same time:
   
      
   
```java
    @Test
    public void test() {
        try {
            while (true) {
                hadoopUtils.copyLocalToHdfs(
                    "D:/tmp/113/apache-dolphinscheduler-1.3.9_pro-1.noarch.rpm",
                    "/tmp/test", false, true);
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }

    @Test
    public void testCopyHdfsToLocal() {
        try {
            while (true) {
                hadoopUtils.copyHdfsToLocal(
                    "/tmp/test/apache-dolphinscheduler-1.3.9_pro-1.noarch.rpm",
                    "D:\\tmp\\113\\a.zip", false, true);
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
```
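   As a client-side workaround until the race itself is addressed, the read/download call could be wrapped in a small retry helper, since the failure is transient: a fresh attempt re-resolves block locations after the replacement completes. A minimal sketch; the `withRetries` helper and the simulated failing action are illustrative, not DolphinScheduler or Hadoop API:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryCopy {

    // Retry an I/O action a fixed number of times. A transient
    // ReplicaNotFoundException surfaces to the caller as an IOException,
    // and a fresh attempt re-resolves block locations.
    static <T> T withRetries(Callable<T> action, int attempts) throws Exception {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.call();
            } catch (IOException e) {
                last = e; // remember the failure and retry
            }
        }
        throw last;
    }

    // Demo: a simulated copy that fails twice, then succeeds on the
    // third attempt. Returns the number of attempts made.
    static int demo() throws Exception {
        int[] calls = {0};
        withRetries(() -> {
            if (++calls[0] < 3) {
                throw new IOException("replica not found");
            }
            return "ok";
        }, 5);
        return calls[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println("succeeded after " + demo() + " attempts");
    }
}
```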
   
   ### Anything else
   
   The error only occurs when the timing lines up, i.e. when the resource is replaced while another client is in the middle of reading it.
   
   
   ### Version
   
   3.1.x
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 
[email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
