Ankur Goenka created BEAM-6057:
----------------------------------

             Summary: Python post commit failing :beam-sdks-python:hdfsIntegrationTest
                 Key: BEAM-6057
                 URL: https://issues.apache.org/jira/browse/BEAM-6057
             Project: Beam
          Issue Type: Bug
          Components: build-system
            Reporter: Ankur Goenka
            Assignee: Luke Cwik


:beam-sdks-python:hdfsIntegrationTest is failing on Jenkins with the following error.
{code:java}
datanode_1_1f917c3c0d2e | 18/11/13 00:57:28 INFO datanode.DataNode: Block pool BP-1790693572-172.18.0.2-1542070629622 (Datanode Uuid 06470cf0-ac11-4c97-80fe-d5463ee38b47) service to namenode/172.18.0.2:8020 beginning handshake with NN
namenode_1_78f1ba71281a | 18/11/13 00:57:38 WARN blockmanagement.DatanodeManager: Unresolved datanode registration: hostname cannot be resolved (ip=172.18.0.3, hostname=172.18.0.3)
namenode_1_78f1ba71281a | 18/11/13 00:57:38 INFO namenode.FSNamesystem: FSNamesystem write lock held for 10010 ms via
namenode_1_78f1ba71281a | java.lang.Thread.getStackTrace(Thread.java:1559)
namenode_1_78f1ba71281a | org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:233)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1537)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3652)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1386)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:101)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28419)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
namenode_1_78f1ba71281a | java.security.AccessController.doPrivileged(Native Method)
namenode_1_78f1ba71281a | javax.security.auth.Subject.doAs(Subject.java:422)
namenode_1_78f1ba71281a | org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
namenode_1_78f1ba71281a | Number of suppressed write-lock reports: 0
namenode_1_78f1ba71281a | Longest write-lock held interval: 10010
namenode_1_78f1ba71281a | 18/11/13 00:57:38 INFO ipc.Server: IPC Server handler 2 on 8020, call Call#3 Retry#0 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.registerDatanode from 172.18.0.3:35480
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode because hostname cannot be resolved (ip=172.18.0.3, hostname=172.18.0.3): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=06470cf0-ac11-4c97-80fe-d5463ee38b47, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-4983ef53-0780-42e1-bdd3-d01ccaadf21c;nsid=282386608;c=1542070629622)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:867)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3649)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1386)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:101)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28419)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
namenode_1_78f1ba71281a | at java.security.AccessController.doPrivileged(Native Method)
namenode_1_78f1ba71281a | at javax.security.auth.Subject.doAs(Subject.java:422)
namenode_1_78f1ba71281a | at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
datanode_1_1f917c3c0d2e | 18/11/13 00:57:38 ERROR datanode.DataNode: Initialization failed for Block pool BP-1790693572-172.18.0.2-1542070629622 (Datanode Uuid 06470cf0-ac11-4c97-80fe-d5463ee38b47) service to namenode/172.18.0.2:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=172.18.0.3, hostname=172.18.0.3): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=06470cf0-ac11-4c97-80fe-d5463ee38b47, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-4983ef53-0780-42e1-bdd3-d01ccaadf21c;nsid=282386608;c=1542070629622)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:867)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3649)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1386)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:101)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28419)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
datanode_1_1f917c3c0d2e | at java.security.AccessController.doPrivileged(Native Method)
datanode_1_1f917c3c0d2e | at javax.security.auth.Subject.doAs(Subject.java:422)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
datanode_1_1f917c3c0d2e |
datanode_1_1f917c3c0d2e | 18/11/13 00:57:43 INFO datanode.DataNode: Block pool BP-1790693572-172.18.0.2-1542070629622 (Datanode Uuid 06470cf0-ac11-4c97-80fe-d5463ee38b47) service to namenode/172.18.0.2:8020 beginning handshake with NN
namenode_1_78f1ba71281a | 18/11/13 00:57:53 WARN blockmanagement.DatanodeManager: Unresolved datanode registration: hostname cannot be resolved (ip=172.18.0.3, hostname=172.18.0.3)
namenode_1_78f1ba71281a | 18/11/13 00:57:53 INFO namenode.FSNamesystem: FSNamesystem write lock held for 10012 ms via
namenode_1_78f1ba71281a | java.lang.Thread.getStackTrace(Thread.java:1559)
namenode_1_78f1ba71281a | org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:233)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1537)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3652)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1386)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:101)
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28419)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
namenode_1_78f1ba71281a | java.security.AccessController.doPrivileged(Native Method)
namenode_1_78f1ba71281a | javax.security.auth.Subject.doAs(Subject.java:422)
namenode_1_78f1ba71281a | org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
namenode_1_78f1ba71281a | org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
namenode_1_78f1ba71281a | Number of suppressed write-lock reports: 0
namenode_1_78f1ba71281a | Longest write-lock held interval: 10012
namenode_1_78f1ba71281a | 18/11/13 00:57:53 INFO ipc.Server: IPC Server handler 8 on 8020, call Call#5 Retry#0 org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.registerDatanode from 172.18.0.3:35496
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode because hostname cannot be resolved (ip=172.18.0.3, hostname=172.18.0.3): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=06470cf0-ac11-4c97-80fe-d5463ee38b47, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-4983ef53-0780-42e1-bdd3-d01ccaadf21c;nsid=282386608;c=1542070629622)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:867)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3649)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1386)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:101)
namenode_1_78f1ba71281a | at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28419)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
namenode_1_78f1ba71281a | at java.security.AccessController.doPrivileged(Native Method)
namenode_1_78f1ba71281a | at javax.security.auth.Subject.doAs(Subject.java:422)
namenode_1_78f1ba71281a | at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
namenode_1_78f1ba71281a | at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
datanode_1_1f917c3c0d2e | 18/11/13 00:57:53 ERROR datanode.DataNode: Initialization failed for Block pool BP-1790693572-172.18.0.2-1542070629622 (Datanode Uuid 06470cf0-ac11-4c97-80fe-d5463ee38b47) service to namenode/172.18.0.2:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=172.18.0.3, hostname=172.18.0.3): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=06470cf0-ac11-4c97-80fe-d5463ee38b47, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-4983ef53-0780-42e1-bdd3-d01ccaadf21c;nsid=282386608;c=1542070629622)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:867)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3649)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1386)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:101)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28419)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
datanode_1_1f917c3c0d2e | at java.security.AccessController.doPrivileged(Native Method)
datanode_1_1f917c3c0d2e | at javax.security.auth.Subject.doAs(Subject.java:422)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
datanode_1_1f917c3c0d2e | at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
datanode_1_1f917c3c0d2e |
test_1_b589c004a4e9 | INFO Instantiated configuration from '/app/sdks/python/apache_beam/io/hdfs_integration_test/hdfscli.cfg'.
test_1_b589c004a4e9 | INFO Instantiated <InsecureClient(url='http://namenode:50070')>.
test_1_b589c004a4e9 | INFO Uploading 'kinglear.txt' to '/'.
test_1_b589c004a4e9 | DEBUG Resolved path '/' to '/'.
test_1_b589c004a4e9 | INFO Listing '/'.
test_1_b589c004a4e9 | DEBUG Resolved path '/' to '/'.
test_1_b589c004a4e9 | DEBUG Resolved path '/' to '/'.
test_1_b589c004a4e9 | DEBUG Starting new HTTP connection (1): namenode:50070
namenode_1_78f1ba71281a | Nov 13, 2018 12:57:56 AM com.sun.jersey.api.core.PackagesResourceConfig init
namenode_1_78f1ba71281a | INFO: Scanning for root resource and provider classes in the packages:
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.server.namenode.web.resources
namenode_1_78f1ba71281a | org.apache.hadoop.hdfs.web.resources
namenode_1_78f1ba71281a | Nov 13, 2018 12:57:57 AM com.sun.jersey.api.core.ScanningResourceConfig logClasses
namenode_1_78f1ba71281a | INFO: Root resource classes found:
namenode_1_78f1ba71281a | class org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods
namenode_1_78f1ba71281a | Nov 13, 2018 12:57:57 AM com.sun.jersey.api.core.ScanningResourceConfig logClasses
namenode_1_78f1ba71281a | INFO: Provider classes found:
namenode_1_78f1ba71281a | class org.apache.hadoop.hdfs.web.resources.UserProvider
namenode_1_78f1ba71281a | class org.apache.hadoop.hdfs.web.resources.ExceptionHandler
namenode_1_78f1ba71281a | Nov 13, 2018 12:57:57 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
namenode_1_78f1ba71281a | INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
namenode_1_78f1ba71281a | Nov 13, 2018 12:57:58 AM com.sun.jersey.spi.inject.Errors processErrorMessages
namenode_1_78f1ba71281a | WARNING: The following warnings have been detected with resource and/or provider classes:
namenode_1_78f1ba71281a | WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.putRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.PutOpParam,org.apache.hadoop.hdfs.web.resources.DestinationParam,org.apache.hadoop.hdfs.web.resources.OwnerParam,org.apache.hadoop.hdfs.web.resources.GroupParam,org.apache.hadoop.hdfs.web.resources.PermissionParam,org.apache.hadoop.hdfs.web.resources.OverwriteParam,org.apache.hadoop.hdfs.web.resources.BufferSizeParam,org.apache.hadoop.hdfs.web.resources.ReplicationParam,org.apache.hadoop.hdfs.web.resources.BlockSizeParam,org.apache.hadoop.hdfs.web.resources.ModificationTimeParam,org.apache.hadoop.hdfs.web.resources.AccessTimeParam,org.apache.hadoop.hdfs.web.resources.RenameOptionSetParam,org.apache.hadoop.hdfs.web.resources.CreateParentParam,org.apache.hadoop.hdfs.web.resources.TokenArgumentParam,org.apache.hadoop.hdfs.web.resources.AclPermissionParam,org.apache.hadoop.hdfs.web.resources.XAttrNameParam,org.apache.hadoop.hdfs.web.resources.XAttrValueParam,org.apache.hadoop.hdfs.web.resources.XAttrSetFlagParam,org.apache.hadoop.hdfs.web.resources.SnapshotNameParam,org.apache.hadoop.hdfs.web.resources.OldSnapshotNameParam,org.apache.hadoop.hdfs.web.resources.ExcludeDatanodesParam,org.apache.hadoop.hdfs.web.resources.CreateFlagParam,org.apache.hadoop.hdfs.web.resources.StoragePolicyParam) throws java.io.IOException,java.lang.InterruptedException, with URI template, "/", is treated as a resource method
namenode_1_78f1ba71281a | WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.postRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.PostOpParam,org.apache.hadoop.hdfs.web.resources.ConcatSourcesParam,org.apache.hadoop.hdfs.web.resources.BufferSizeParam,org.apache.hadoop.hdfs.web.resources.ExcludeDatanodesParam,org.apache.hadoop.hdfs.web.resources.NewLengthParam) throws java.io.IOException,java.lang.InterruptedException, with URI template, "/", is treated as a resource method
namenode_1_78f1ba71281a | WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.deleteRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.DeleteOpParam,org.apache.hadoop.hdfs.web.resources.RecursiveParam,org.apache.hadoop.hdfs.web.resources.SnapshotNameParam) throws java.io.IOException,java.lang.InterruptedException, with URI template, "/", is treated as a resource method
namenode_1_78f1ba71281a | WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(org.apache.hadoop.security.UserGroupInformation,org.apache.hadoop.hdfs.web.resources.DelegationParam,org.apache.hadoop.hdfs.web.resources.UserParam,org.apache.hadoop.hdfs.web.resources.DoAsParam,org.apache.hadoop.hdfs.web.resources.GetOpParam,org.apache.hadoop.hdfs.web.resources.OffsetParam,org.apache.hadoop.hdfs.web.resources.LengthParam,org.apache.hadoop.hdfs.web.resources.RenewerParam,org.apache.hadoop.hdfs.web.resources.BufferSizeParam,java.util.List,org.apache.hadoop.hdfs.web.resources.XAttrEncodingParam,org.apache.hadoop.hdfs.web.resources.ExcludeDatanodesParam,org.apache.hadoop.hdfs.web.resources.FsActionParam,org.apache.hadoop.hdfs.web.resources.TokenKindParam,org.apache.hadoop.hdfs.web.resources.TokenServiceParam) throws java.io.IOException,java.lang.InterruptedException, with URI template, "/", is treated as a resource method
test_1_b589c004a4e9 | DEBUG http://namenode:50070 "GET /webhdfs/v1/?user.name=root&op=LISTSTATUS HTTP/1.1" 200 None
test_1_b589c004a4e9 | DEBUG Uploading 1 files using 1 thread(s).
test_1_b589c004a4e9 | DEBUG Uploading 'kinglear.txt' to '/kinglear.txt'.
test_1_b589c004a4e9 | INFO Writing to '/kinglear.txt'.
test_1_b589c004a4e9 | DEBUG Resolved path '/kinglear.txt' to '/kinglear.txt'.
test_1_b589c004a4e9 | DEBUG http://namenode:50070 "PUT /webhdfs/v1/kinglear.txt?user.name=root&overwrite=False&op=CREATE HTTP/1.1" 403 None
test_1_b589c004a4e9 | ERROR Error while uploading. Attempting cleanup.
test_1_b589c004a4e9 | Traceback (most recent call last):
test_1_b589c004a4e9 |   File "/usr/local/lib/python2.7/site-packages/hdfs/client.py", line 594, in upload
test_1_b589c004a4e9 |     _upload(path_tuple)
test_1_b589c004a4e9 |   File "/usr/local/lib/python2.7/site-packages/hdfs/client.py", line 524, in _upload
test_1_b589c004a4e9 |     self.write(_temp_path, wrap(reader, chunk_size, progress), **kwargs)
test_1_b589c004a4e9 |   File "/usr/local/lib/python2.7/site-packages/hdfs/client.py", line 456, in write
test_1_b589c004a4e9 |     buffersize=buffersize,
test_1_b589c004a4e9 |   File "/usr/local/lib/python2.7/site-packages/hdfs/client.py", line 112, in api_handler
test_1_b589c004a4e9 |     raise err
test_1_b589c004a4e9 | HdfsError: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
test_1_b589c004a4e9 | INFO Deleting '/kinglear.txt' recursively.
test_1_b589c004a4e9 | DEBUG Resolved path '/kinglear.txt' to '/kinglear.txt'.
namenode_1_78f1ba71281a | 18/11/13 00:57:58 INFO namenode.EditLogFileOutputStream: Nothing to flush
test_1_b589c004a4e9 | DEBUG http://namenode:50070 "DELETE /webhdfs/v1/kinglear.txt?user.name=root&recursive=True&op=DELETE HTTP/1.1" 200 None
test_1_b589c004a4e9 | ERROR Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
datanode_1_1f917c3c0d2e | 18/11/13 00:57:58 INFO datanode.DataNode: Block pool BP-1790693572-172.18.0.2-1542070629622 (Datanode Uuid 06470cf0-ac11-4c97-80fe-d5463ee38b47) service to namenode/172.18.0.2:8020 beginning handshake with NN
hdfs_it-jenkins-beam_postcommit_python_verify-6538_test_1_b589c004a4e9 exited with code 1
Stopping hdfs_it-jenkins-beam_postcommit_python_verify-6538_datanode_1_1f917c3c0d2e ...
Stopping hdfs_it-jenkins-beam_postcommit_python_verify-6538_namenode_1_78f1ba71281a ...
Stopping hdfs_it-jenkins-beam_postcommit_python_verify-6538_datanode_1_1f917c3c0d2e ... done
Stopping hdfs_it-jenkins-beam_postcommit_python_verify-6538_namenode_1_78f1ba71281a ... done
Aborting on container exit...
> Task :beam-sdks-python:hdfsIntegrationTest FAILED
{code}
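The root symptom is the DisallowedDatanodeException: the namenode cannot reverse-resolve the datanode's container IP (172.18.0.3) to a hostname, so the datanode never registers and the WebHDFS upload fails with "Failed to find datanode". This is common in Docker networks where containers lack reverse-DNS entries. As a possible workaround (an assumption about the fix, not verified against this build), the namenode's registration hostname check could be disabled in the test cluster's hdfs-site.xml:

{code:xml}
<!-- hdfs-site.xml: possible workaround (untested here) - skip the
     reverse-DNS check when a datanode registers with the namenode -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
{code}

Alternatively, giving the datanode container a hostname that resolves on the compose network (e.g. a `hostname:` entry in the docker-compose file) should let the default check pass.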



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
