Xiao Chen created HDFS-12378:
--------------------------------

             Summary: TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails on trunk
                 Key: HDFS-12378
                 URL: https://issues.apache.org/jira/browse/HDFS-12378
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Xiao Chen
Saw on https://builds.apache.org/job/PreCommit-HDFS-Build/20928/testReport/org.apache.hadoop.hdfs/TestClientProtocolForPipelineRecovery/testZeroByteBlockRecovery/:

Error Message
{noformat}
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
{noformat}

Stacktrace
{noformat}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684)
{noformat}

Standard Output
{noformat}
2017-08-30 18:02:37,714 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(469)) - starting cluster: numNameNodes=1, numDataNodes=3
Formatting using clusterid: testClusterID
2017-08-30 18:02:37,716 [main] INFO namenode.FSEditLog (FSEditLog.java:newInstance(224)) - Edit logging is async:false
2017-08-30 18:02:37,716 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(742)) - KeyProvider: null
2017-08-30 18:02:37,716 [main] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(120)) - fsLock is fair: true
2017-08-30 18:02:37,716 [main] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(136)) - Detailed lock hold time metrics enabled: false
2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(763)) - fsOwner = jenkins (auth:SIMPLE)
2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(764)) - supergroup = supergroup
2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(765)) - isPermissionEnabled = true
2017-08-30 18:02:37,717 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(776)) - HA Enabled: false
2017-08-30 18:02:37,718 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-08-30 18:02:37,718 [main] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2017-08-30 18:02:37,718 [main] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(309)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2017-08-30 18:02:37,719 [main] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-08-30 18:02:37,719 [main] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will start around 2017 Aug 30 18:02:37
2017-08-30 18:02:37,719 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2017-08-30 18:02:37,719 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,720 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2017-08-30 18:02:37,720 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304 entries
2017-08-30 18:02:37,726 [main] INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(560)) - dfs.block.access.token.enable = false
2017-08-30 18:02:37,726 [main] INFO Configuration.deprecation (Configuration.java:logDeprecation(1285)) - No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2017-08-30 18:02:37,727 [main] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-08-30 18:02:37,727 [main] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0
2017-08-30 18:02:37,727 [main] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 0
2017-08-30 18:02:37,727 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(546)) - defaultReplication = 3
2017-08-30 18:02:37,727 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(547)) - maxReplication = 512
2017-08-30 18:02:37,728 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(548)) - minReplication = 1
2017-08-30 18:02:37,728 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(549)) - maxReplicationStreams = 2
2017-08-30 18:02:37,728 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(550)) - redundancyRecheckInterval = 3000ms
2017-08-30 18:02:37,728 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(551)) - encryptDataTransfer = false
2017-08-30 18:02:37,728 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(552)) - maxNumBlocksToLog = 1000
2017-08-30 18:02:37,729 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2017-08-30 18:02:37,729 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,729 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2017-08-30 18:02:37,730 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries
2017-08-30 18:02:37,734 [main] INFO namenode.FSDirectory (FSDirectory.java:<init>(278)) - ACLs enabled? false
2017-08-30 18:02:37,734 [main] INFO namenode.FSDirectory (FSDirectory.java:<init>(282)) - POSIX ACL inheritance enabled? true
2017-08-30 18:02:37,734 [main] INFO namenode.FSDirectory (FSDirectory.java:<init>(286)) - XAttrs enabled? true
2017-08-30 18:02:37,734 [main] INFO namenode.NameNode (FSDirectory.java:<init>(350)) - Caching file names occurring more than 10 times
2017-08-30 18:02:37,734 [main] INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(105)) - Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
2017-08-30 18:02:37,734 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2017-08-30 18:02:37,734 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,735 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2017-08-30 18:02:37,735 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^19 = 524288 entries
2017-08-30 18:02:37,736 [main] INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-08-30 18:02:37,736 [main] INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2017-08-30 18:02:37,736 [main] INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-08-30 18:02:37,736 [main] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(968)) - Retry cache on namenode is enabled
2017-08-30 18:02:37,737 [main] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(976)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-08-30 18:02:37,737 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2017-08-30 18:02:37,737 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,737 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2017-08-30 18:02:37,737 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536 entries
2017-08-30 18:02:37,738 [main] INFO namenode.FSImage (FSImage.java:format(164)) - Allocated new BlockPoolId: BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:37,740 [main] INFO common.Storage (NNStorage.java:format(583)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1 has been successfully formatted.
2017-08-30 18:02:37,741 [main] INFO common.Storage (NNStorage.java:format(583)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2 has been successfully formatted.
2017-08-30 18:02:37,742 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(429)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
2017-08-30 18:02:37,742 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(429)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
2017-08-30 18:02:37,748 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(432)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 335 bytes saved in 0 seconds.
2017-08-30 18:02:37,748 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(432)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 335 bytes saved in 0 seconds.
2017-08-30 18:02:37,750 [main] INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 1 images with txid >= 0
2017-08-30 18:02:37,751 [main] INFO namenode.NameNode (NameNode.java:createNameNode(1577)) - createNameNode []
2017-08-30 18:02:37,753 [main] INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(121)) - loaded properties from hadoop-metrics2-namenode.properties
2017-08-30 18:02:37,754 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 1 second(s).
2017-08-30 18:02:37,754 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2017-08-30 18:02:37,755 [main] INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(464)) - fs.defaultFS is hdfs://127.0.0.1:0
2017-08-30 18:02:37,757 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@65f2f9b0] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2017-08-30 18:02:37,758 [main] INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1512)) - Starting Web-server for hdfs at: http://localhost:0
2017-08-30 18:02:37,759 [main] INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-08-30 18:02:37,760 [main] INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.namenode is not defined
2017-08-30 18:02:37,760 [main] INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(921)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-08-30 18:02:37,761 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(894)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-08-30 18:02:37,761 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(904)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-08-30 18:02:37,761 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(904)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-08-30 18:02:37,762 [main] INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(98)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-08-30 18:02:37,762 [main] INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(779)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-08-30 18:02:37,762 [main] INFO http.HttpServer2 (HttpServer2.java:bindListener(1121)) - Jetty bound to port 35619
2017-08-30 18:02:37,763 [main] INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502
2017-08-30 18:02:37,764 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@46c3a14d{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
2017-08-30 18:02:37,765 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@5e8cda75{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,AVAILABLE}
2017-08-30 18:02:37,771 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@5633dafd{/,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/hdfs/,AVAILABLE}{/hdfs}
2017-08-30 18:02:37,772 [main] INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@5d5160e6{HTTP/1.1,[http/1.1]}{localhost:35619}
2017-08-30 18:02:37,772 [main] INFO server.Server (Server.java:doStart(414)) - Started @37288ms
2017-08-30 18:02:37,775 [main] INFO namenode.FSEditLog (FSEditLog.java:newInstance(224)) - Edit logging is async:false
2017-08-30 18:02:37,775 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(742)) - KeyProvider: null
2017-08-30 18:02:37,775 [main] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(120)) - fsLock is fair: true
2017-08-30 18:02:37,775 [main] INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(136)) - Detailed lock hold time metrics enabled: false
2017-08-30 18:02:37,776 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(763)) - fsOwner = jenkins (auth:SIMPLE)
2017-08-30 18:02:37,776 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(764)) - supergroup = supergroup
2017-08-30 18:02:37,776 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(765)) - isPermissionEnabled = true
2017-08-30 18:02:37,776 [main] INFO namenode.FSNamesystem (FSNamesystem.java:<init>(776)) - HA Enabled: false
2017-08-30 18:02:37,777 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-08-30 18:02:37,777 [main] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2017-08-30 18:02:37,777 [main] INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(309)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2017-08-30 18:02:37,777 [main] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-08-30 18:02:37,778 [main] INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(82)) - The block deletion will start around 2017 Aug 30 18:02:37
2017-08-30 18:02:37,778 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2017-08-30 18:02:37,778 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,778 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2017-08-30 18:02:37,779 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304 entries
2017-08-30 18:02:37,784 [main] INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(560)) - dfs.block.access.token.enable = false
2017-08-30 18:02:37,785 [main] INFO Configuration.deprecation (Configuration.java:logDeprecation(1285)) - No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2017-08-30 18:02:37,785 [main] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-08-30 18:02:37,785 [main] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0
2017-08-30 18:02:37,785 [main] INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 0
2017-08-30 18:02:37,785 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(546)) - defaultReplication = 3
2017-08-30 18:02:37,785 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(547)) - maxReplication = 512
2017-08-30 18:02:37,786 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(548)) - minReplication = 1
2017-08-30 18:02:37,786 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(549)) - maxReplicationStreams = 2
2017-08-30 18:02:37,786 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(550)) - redundancyRecheckInterval = 3000ms
2017-08-30 18:02:37,786 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(551)) - encryptDataTransfer = false
2017-08-30 18:02:37,786 [main] INFO blockmanagement.BlockManager (BlockManager.java:<init>(552)) - maxNumBlocksToLog = 1000
2017-08-30 18:02:37,786 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2017-08-30 18:02:37,787 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,787 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2017-08-30 18:02:37,787 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries
2017-08-30 18:02:37,790 [main] INFO namenode.FSDirectory (FSDirectory.java:<init>(278)) - ACLs enabled? false
2017-08-30 18:02:37,791 [main] INFO namenode.FSDirectory (FSDirectory.java:<init>(282)) - POSIX ACL inheritance enabled? true
2017-08-30 18:02:37,791 [main] INFO namenode.FSDirectory (FSDirectory.java:<init>(286)) - XAttrs enabled? true
2017-08-30 18:02:37,791 [main] INFO namenode.NameNode (FSDirectory.java:<init>(350)) - Caching file names occurring more than 10 times
2017-08-30 18:02:37,791 [main] INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(105)) - Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
2017-08-30 18:02:37,791 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2017-08-30 18:02:37,791 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,792 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2017-08-30 18:02:37,792 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^19 = 524288 entries
2017-08-30 18:02:37,793 [main] INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-08-30 18:02:37,793 [main] INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2017-08-30 18:02:37,793 [main] INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-08-30 18:02:37,793 [main] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(968)) - Retry cache on namenode is enabled
2017-08-30 18:02:37,793 [main] INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(976)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-08-30 18:02:37,793 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2017-08-30 18:02:37,794 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2017-08-30 18:02:37,794 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2017-08-30 18:02:37,794 [main] INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536 entries
2017-08-30 18:02:37,831 [main] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/in_use.lock acquired by nodename 26462@0c74973bd755
2017-08-30 18:02:37,833 [main] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2/in_use.lock acquired by nodename 26462@0c74973bd755
2017-08-30 18:02:37,834 [main] INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(396)) - Recovering unfinalized segments in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current
2017-08-30 18:02:37,834 [main] INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(396)) - Recovering unfinalized segments in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2/current
2017-08-30 18:02:37,834 [main] INFO namenode.FSImage (FSImage.java:loadFSImage(712)) - No edit log streams selected.
2017-08-30 18:02:37,835 [main] INFO namenode.FSImage (FSImage.java:loadFSImageFile(775)) - Planning to load image: FSImageFile(file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2017-08-30 18:02:37,835 [main] INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(266)) - Loading 1 INodes.
2017-08-30 18:02:37,836 [main] INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(185)) - Loaded FSImage in 0 seconds.
2017-08-30 18:02:37,836 [main] INFO namenode.FSImage (FSImage.java:loadFSImage(941)) - Loaded image for txid 0 from /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current/fsimage_0000000000000000000
2017-08-30 18:02:37,836 [main] INFO namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1080)) - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2017-08-30 18:02:37,836 [main] INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(1324)) - Starting log segment at 1
2017-08-30 18:02:37,841 [main] INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2017-08-30 18:02:37,841 [main] INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(714)) - Finished loading FSImage in 46 msecs
2017-08-30 18:02:37,841 [main] INFO namenode.NameNode (NameNodeRpcServer.java:<init>(439)) - RPC server is binding to localhost:0
2017-08-30 18:02:37,842 [main] INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2017-08-30 18:02:37,842 [Socket Reader #1 for port 47551] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for port 47551
2017-08-30 18:02:37,847 [main] INFO namenode.NameNode (NameNode.java:initialize(737)) - Clients are to use localhost:47551 to access this namenode/service.
2017-08-30 18:02:37,847 [main] INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4811)) - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2017-08-30 18:02:37,858 [main] INFO namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(158)) - Number of blocks under construction: 0
2017-08-30 18:02:37,859 [main] INFO blockmanagement.BlockManager (BlockManager.java:initializeReplQueues(4650)) - initializing replication queues
2017-08-30 18:02:37,860 [main] INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(396)) - STATE* Leaving safe mode after 0 secs
2017-08-30 18:02:37,860 [main] INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(402)) - STATE* Network topology has 0 racks and 0 datanodes
2017-08-30 18:02:37,860 [main] INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(404)) - STATE* UnderReplicatedBlocks has 0 blocks
2017-08-30 18:02:37,866 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3355)) - Total number of blocks = 0
2017-08-30 18:02:37,867 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3356)) - Number of invalid blocks = 0
2017-08-30 18:02:37,867 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3357)) - Number of under-replicated blocks = 0
2017-08-30 18:02:37,867 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3358)) - Number of over-replicated blocks = 0
2017-08-30 18:02:37,867 [Reconstruction Queue Initializer] INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3360)) - Number of blocks being written = 0
2017-08-30 18:02:37,867 [Reconstruction Queue Initializer] INFO hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3363)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 8 msec
2017-08-30 18:02:37,869 [IPC Server Responder] INFO ipc.Server (Server.java:run(1307)) - IPC Server Responder: starting
2017-08-30 18:02:37,869 [IPC Server listener on 47551] INFO ipc.Server (Server.java:run(1146)) - IPC Server listener on 47551: starting
2017-08-30 18:02:37,872 [main] INFO namenode.NameNode (NameNode.java:startCommonServices(830)) - NameNode RPC up at: localhost/127.0.0.1:47551
2017-08-30 18:02:37,873 [main] INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1192)) - Starting services required for active state
2017-08-30 18:02:37,873 [main] INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(706)) - Initializing quota with 4 thread(s)
2017-08-30 18:02:37,873 [main] INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(715)) - Quota initialization completed in 0 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2017-08-30 18:02:37,875 [CacheReplicationMonitor(1864849826)] INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2017-08-30 18:02:37,879 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1591)) - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2
2017-08-30 18:02:37,879 [main] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1
2017-08-30 18:02:37,880 [main] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2
2017-08-30 18:02:37,887 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
2017-08-30 18:02:37,887 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-08-30 18:02:37,887 [main] INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2017-08-30 18:02:37,888 [main] INFO datanode.DataNode (DataNode.java:<init>(496)) - Configured hostname is 127.0.0.1
2017-08-30 18:02:37,888 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-08-30 18:02:37,888 [main] INFO datanode.DataNode (DataNode.java:startDataNode(1385)) - Starting DataNode with maxLockedMemory = 0
2017-08-30 18:02:37,888 [main] INFO datanode.DataNode (DataNode.java:initDataXceiver(1142)) - Opened streaming server at /127.0.0.1:46794
2017-08-30 18:02:37,889 [main] INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 10485760 bytes/s
2017-08-30 18:02:37,889 [main] INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2017-08-30 18:02:37,892 [main] INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-08-30 18:02:37,892 [main] INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2017-08-30 18:02:37,893 [main] INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(921)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-08-30 18:02:37,893 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(894)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-08-30 18:02:37,894 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(904)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-08-30 18:02:37,894 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(904)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-08-30 18:02:37,894 [main] INFO http.HttpServer2 (HttpServer2.java:bindListener(1121)) - Jetty bound to port 46189
2017-08-30 18:02:37,895 [main] INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502
2017-08-30 18:02:37,897 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@4e1ce44{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
2017-08-30 18:02:37,898 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@7a7cc52c{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,AVAILABLE}
2017-08-30 18:02:37,903 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@29eda4f8{/,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/datanode/,AVAILABLE}{/datanode}
2017-08-30 18:02:37,904 [main] INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@1b9776f5{HTTP/1.1,[http/1.1]}{localhost:46189}
2017-08-30 18:02:37,905 [main] INFO server.Server (Server.java:doStart(414)) - Started @37420ms
2017-08-30 18:02:37,911 [main] INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(239)) - Listening HTTP traffic on /127.0.0.1:52625
2017-08-30 18:02:37,911 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@79d9214d] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2017-08-30 18:02:37,911 [main] INFO datanode.DataNode (DataNode.java:startDataNode(1412)) - dnUserName = jenkins
2017-08-30 18:02:37,911 [main] INFO datanode.DataNode (DataNode.java:startDataNode(1413)) - supergroup = supergroup
2017-08-30 18:02:37,912 [main] INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2017-08-30 18:02:37,912 [Socket Reader #1 for port 58248] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for port 58248
2017-08-30 18:02:37,915 [main] INFO datanode.DataNode (DataNode.java:initIpcServer(1029)) - Opened IPC server at /127.0.0.1:58248
2017-08-30 18:02:37,916 [main] INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2017-08-30 18:02:37,917 [main] INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(198)) - Starting BPOfferServices for nameservices: <default>
2017-08-30 18:02:37,917 [Thread-997] INFO datanode.DataNode (BPServiceActor.java:run(809)) - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:47551 starting to offer service
2017-08-30 18:02:37,918 [IPC Server Responder] INFO ipc.Server (Server.java:run(1307)) - IPC Server Responder: starting
2017-08-30 18:02:37,919 [IPC Server listener on 58248] INFO ipc.Server (Server.java:run(1146)) - IPC Server listener on 58248: starting
2017-08-30 18:02:37,922 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1591)) - Starting DataNode 1 with dfs.datanode.data.dir: [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data3,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data4
2017-08-30 18:02:37,923 [Thread-997] INFO datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(364)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:47551
2017-08-30 18:02:37,924 [Thread-997] INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(346)) - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
2017-08-30 18:02:37,925 [main] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data3
2017-08-30 18:02:37,925 [main] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data4
2017-08-30 18:02:37,925 [Thread-997] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/in_use.lock acquired by nodename 26462@0c74973bd755
2017-08-30 18:02:37,925 [Thread-997] INFO common.Storage (DataStorage.java:loadStorageDirectory(273)) - Storage directory with location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1 is not formatted for namespace 785578744. Formatting...
2017-08-30 18:02:37,925 [Thread-997] INFO common.Storage (DataStorage.java:createStorageID(150)) - Generated new storageID DS-c7f21b9d-2106-4d12-bb20-c414e1397d77 for directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1
2017-08-30 18:02:37,927 [Thread-997] INFO common.Storage (Storage.java:tryLock(847)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2/in_use.lock acquired by nodename 26462@0c74973bd755
2017-08-30 18:02:37,927 [Thread-997] INFO common.Storage (DataStorage.java:loadStorageDirectory(273)) - Storage directory with location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2 is not formatted for namespace 785578744. Formatting...
2017-08-30 18:02:37,928 [Thread-997] INFO common.Storage (DataStorage.java:createStorageID(150)) - Generated new storageID DS-230ffa8a-d07d-4ac0-86e2-9acc7403dc9d for directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2
2017-08-30 18:02:37,934 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
2017-08-30 18:02:37,934 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-08-30 18:02:37,935 [main] INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2017-08-30 18:02:37,935 [main] INFO datanode.DataNode (DataNode.java:<init>(496)) - Configured hostname is 127.0.0.1
2017-08-30 18:02:37,935 [main] INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2017-08-30 18:02:37,936 [main] INFO datanode.DataNode (DataNode.java:startDataNode(1385)) - Starting DataNode with maxLockedMemory = 0
2017-08-30 18:02:37,936 [main] INFO datanode.DataNode (DataNode.java:initDataXceiver(1142)) - Opened streaming server at /127.0.0.1:51925
2017-08-30 18:02:37,936 [main] INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 10485760 bytes/s
2017-08-30 18:02:37,937 [main] INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2017-08-30 18:02:37,939 [main] INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-08-30 18:02:37,939 [main] INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2017-08-30 18:02:37,940 [main] INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(921)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-08-30 18:02:37,941 [Thread-997] INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage directories for bpid BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:37,941 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(894)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-08-30 18:02:37,941 [Thread-997] INFO common.Storage (Storage.java:lock(806)) - Locking is disabled for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current/BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:37,941 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(904)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-08-30 18:02:37,941 [Thread-997] INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage directory for location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1 and block pool id BP-1775344991-172.17.0.2-1504116157738 is not formatted for BP-1775344991-172.17.0.2-1504116157738. Formatting ...
2017-08-30 18:02:37,941 [Thread-997] INFO common.Storage (BlockPoolSliceStorage.java:format(280)) - Formatting block pool BP-1775344991-172.17.0.2-1504116157738 directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current/BP-1775344991-172.17.0.2-1504116157738/current
2017-08-30 18:02:37,941 [main] INFO http.HttpServer2 (HttpServer2.java:addFilter(904)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-08-30 18:02:37,942 [main] INFO http.HttpServer2 (HttpServer2.java:bindListener(1121)) - Jetty bound to port 55765
2017-08-30 18:02:37,942 [main] INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502
2017-08-30 18:02:37,945 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@6631cb64{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
2017-08-30 18:02:37,945 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@481e91b6{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,AVAILABLE}
2017-08-30 18:02:37,951 [Thread-997] INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage directories for bpid BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:37,952 [Thread-997] INFO common.Storage (Storage.java:lock(806)) - Locking is disabled for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2/current/BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:37,952 [Thread-997] INFO common.Storage (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage directory for location [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2 and block pool id BP-1775344991-172.17.0.2-1504116157738 is not formatted for BP-1775344991-172.17.0.2-1504116157738. Formatting ...
2017-08-30 18:02:37,952 [Thread-997] INFO common.Storage (BlockPoolSliceStorage.java:format(280)) - Formatting block pool BP-1775344991-172.17.0.2-1504116157738 directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2/current/BP-1775344991-172.17.0.2-1504116157738/current
2017-08-30 18:02:37,953 [main] INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@42b28ff1{/,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/datanode/,AVAILABLE}{/datanode}
2017-08-30 18:02:37,954 [Thread-997] INFO datanode.DataNode (DataNode.java:initStorage(1693)) - Setting up storage: nsid=785578744;bpid=BP-1775344991-172.17.0.2-1504116157738;lv=-57;nsInfo=lv=-64;cid=testClusterID;nsid=785578744;c=1504116157738;bpid=BP-1775344991-172.17.0.2-1504116157738;dnuuid=null
2017-08-30 18:02:37,954 [main] INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@36061cf3{HTTP/1.1,[http/1.1]}{localhost:55765}
2017-08-30 18:02:37,954 [main] INFO server.Server (Server.java:doStart(414)) - Started @37470ms
2017-08-30 18:02:37,955 [Thread-997] INFO datanode.DataNode (DataNode.java:checkDatanodeUuid(1517)) - Generated and persisted new Datanode UUID d7ee1847-d046-47ea-9a05-7fd25bcb1e86
2017-08-30 18:02:37,959 [Thread-997] INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(306)) - Added new volume: DS-c7f21b9d-2106-4d12-bb20-c414e1397d77
2017-08-30 18:02:37,962 [Thread-997] INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(435)) - Added volume - [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1, StorageType: DISK
2017-08-30 18:02:37,963 [Thread-997] INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(306)) - Added new volume: DS-230ffa8a-d07d-4ac0-86e2-9acc7403dc9d
2017-08-30 18:02:37,964 [Thread-997] INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(435)) - Added volume - [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2, StorageType: DISK
2017-08-30 18:02:37,964 [Thread-997] INFO impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(2178)) - Registered FSDatasetState MBean
2017-08-30 18:02:37,965 [Thread-997] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1
2017-08-30 18:02:37,967 [Thread-997] INFO checker.DatasetVolumeChecker (DatasetVolumeChecker.java:checkAllVolumes(210)) - Scheduled health check for volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1
2017-08-30 18:02:37,967 [main] INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(239)) - Listening HTTP traffic on /127.0.0.1:42070
2017-08-30 18:02:37,967 [Thread-997] INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2
2017-08-30 18:02:37,967 [Thread-997] INFO checker.DatasetVolumeChecker (DatasetVolumeChecker.java:checkAllVolumes(210)) - Scheduled health check for volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2
2017-08-30 18:02:37,968 [main] INFO datanode.DataNode (DataNode.java:startDataNode(1412)) - dnUserName = jenkins
2017-08-30 18:02:37,968 [main] INFO datanode.DataNode (DataNode.java:startDataNode(1413)) - supergroup = supergroup
2017-08-30 18:02:37,968 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@76889e60] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2017-08-30 18:02:37,968 [Thread-997] INFO impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(2664)) - Adding block pool BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:37,969 [main] INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2017-08-30 18:02:37,969 [Thread-1025] INFO impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Scanning block pool BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1...
2017-08-30 18:02:37,969 [Thread-1026] INFO impl.FsDatasetImpl (FsVolumeList.java:run(409)) - Scanning block pool BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2...
2017-08-30 18:02:37,969 [Socket Reader #1 for port 50140] INFO ipc.Server (Server.java:run(1067)) - Starting Socket Reader #1 for port 50140
2017-08-30 18:02:37,975 [Thread-1026] INFO impl.FsDatasetImpl (FsVolumeList.java:run(414)) - Time taken to scan block pool BP-1775344991-172.17.0.2-1504116157738 on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2: 7ms
2017-08-30 18:02:37,977 [main] INFO datanode.DataNode (DataNode.java:initIpcServer(1029)) - Opened IPC server at /127.0.0.1:50140
2017-08-30 18:02:37,979 [Thread-1025] INFO impl.FsDatasetImpl (FsVolumeList.java:run(414)) - Time taken to scan block pool BP-1775344991-172.17.0.2-1504116157738 on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1: 10ms
2017-08-30 18:02:37,979 [Thread-997] INFO impl.FsDatasetImpl (FsVolumeList.java:addBlockPool(440)) - Total time to scan all replicas for block pool BP-1775344991-172.17.0.2-1504116157738: 10ms
2017-08-30 18:02:37,979 [Thread-1032] INFO impl.FsDatasetImpl (FsVolumeList.java:run(198)) - Adding replicas to map for block pool BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1...
2017-08-30 18:02:37,979 [Thread-1033] INFO impl.FsDatasetImpl (FsVolumeList.java:run(198)) - Adding replicas to map for block pool BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2...
2017-08-30 18:02:37,980 [Thread-1032] INFO impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(770)) - Replica Cache file: /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current/BP-1775344991-172.17.0.2-1504116157738/current/replicas doesn't exist
2017-08-30 18:02:37,980 [Thread-1033] INFO impl.BlockPoolSlice (BlockPoolSlice.java:readReplicasFromCache(770)) - Replica Cache file: /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2/current/BP-1775344991-172.17.0.2-1504116157738/current/replicas doesn't exist
2017-08-30 18:02:37,980 [Thread-1032] INFO impl.FsDatasetImpl (FsVolumeList.java:run(203)) - Time to add replicas to map for block pool BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1: 0ms
2017-08-30 18:02:37,980 [Thread-1033] INFO impl.FsDatasetImpl (FsVolumeList.java:run(203)) - Time to add replicas to map for block pool BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2: 0ms
2017-08-30 18:02:37,980 [Thread-997] INFO impl.FsDatasetImpl (FsVolumeList.java:getAllVolumesMap(229)) - Total time to add all replicas to map: 1ms
2017-08-30 18:02:37,980 [main] INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2017-08-30 18:02:37,980 [main] INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(198)) - Starting BPOfferServices for nameservices: <default>
2017-08-30 18:02:37,980 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(382)) - Now scanning bpid BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1
2017-08-30 18:02:37,981 [Thread-997] INFO datanode.DirectoryScanner (DirectoryScanner.java:start(281)) - Periodic Directory Tree Verification scan starting at 8/30/17 8:12 PM with interval of 21600000ms
2017-08-30 18:02:37,980 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2)] INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(382)) - Now scanning bpid BP-1775344991-172.17.0.2-1504116157738 on volume /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2
2017-08-30 18:02:37,981 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(538)) - VolumeScanner(/testptch/hadoop/h ...[truncated 43082 chars]... :02:38,122 [IPC Server handler 0 on 47551] INFO net.NetworkTopology (NetworkTopology.java:add(145)) - Adding a new node: /default-rack/127.0.0.1:46540
2017-08-30 18:02:38,122 [IPC Server handler 0 on 47551] INFO blockmanagement.BlockReportLeaseManager (BlockReportLeaseManager.java:registerNode(204)) - Registered DN 303e97cb-9549-4084-8530-e6f511ebcbe1 (127.0.0.1:46540).
2017-08-30 18:02:38,123 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BPServiceActor.java:register(782)) - Block pool Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid 303e97cb-9549-4084-8530-e6f511ebcbe1) service to localhost/127.0.0.1:47551 successfully registered with NN
2017-08-30 18:02:38,123 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BPServiceActor.java:offerService(612)) - For namenode localhost/127.0.0.1:47551 using BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2017-08-30 18:02:38,126 [IPC Server handler 1 on 47551] INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(937)) - Adding new storage ID DS-3f87b7ae-9ac7-4485-885e-fc9d2475df21 for DN 127.0.0.1:46540
2017-08-30 18:02:38,126 [IPC Server handler 1 on 47551] INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(937)) - Adding new storage ID DS-a58905ea-f4b4-4b7f-8c16-587ae6b9033b for DN 127.0.0.1:46540
2017-08-30 18:02:38,127 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2437)) - BLOCK* processReport 0x703473b98bf5005f: Processing first storage report for DS-a58905ea-f4b4-4b7f-8c16-587ae6b9033b from datanode 303e97cb-9549-4084-8530-e6f511ebcbe1
2017-08-30 18:02:38,128 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2463)) - BLOCK* processReport 0x703473b98bf5005f: from storage DS-a58905ea-f4b4-4b7f-8c16-587ae6b9033b node DatanodeRegistration(127.0.0.1:46540, datanodeUuid=303e97cb-9549-4084-8530-e6f511ebcbe1, infoPort=41583, infoSecurePort=0, ipcPort=56578, storageInfo=lv=-57;cid=testClusterID;nsid=785578744;c=1504116157738), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2017-08-30 18:02:38,128 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2437)) - BLOCK* processReport 0x703473b98bf5005f: Processing first storage report for DS-3f87b7ae-9ac7-4485-885e-fc9d2475df21 from datanode 303e97cb-9549-4084-8530-e6f511ebcbe1
2017-08-30 18:02:38,128 [Block report processor] INFO BlockStateChange (BlockManager.java:processReport(2463)) - BLOCK* processReport 0x703473b98bf5005f: from storage DS-3f87b7ae-9ac7-4485-885e-fc9d2475df21 node DatanodeRegistration(127.0.0.1:46540, datanodeUuid=303e97cb-9549-4084-8530-e6f511ebcbe1, infoPort=41583, infoSecurePort=0, ipcPort=56578, storageInfo=lv=-57;cid=testClusterID;nsid=785578744;c=1504116157738), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2017-08-30 18:02:38,129 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BPServiceActor.java:blockReport(422)) - Successfully sent block report 0x703473b98bf5005f, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2017-08-30 18:02:38,129 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BPOfferService.java:processCommandFromActive(745)) - Got finalize command for block pool BP-1775344991-172.17.0.2-1504116157738
2017-08-30 18:02:38,160 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2621)) - Cluster is active
2017-08-30 18:02:38,163 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2621)) - Cluster is active
2017-08-30 18:02:38,168 [IPC Server handler 5 on 47551] INFO FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7598)) - allowed=true ugi=jenkins (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/jenkins/noheartbeat.dat dst=null perm=jenkins:supergroup:rw-r--r-- proto=rpc
2017-08-30 18:02:38,174 [IPC Server handler 6 on 47551] INFO hdfs.StateChange (FSDirWriteFileOp.java:logAllocatedBlock(789)) - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:46794, 127.0.0.1:51925 for /user/jenkins/noheartbeat.dat
2017-08-30 18:02:38,177 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:54220 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 src: /127.0.0.1:54220 dest: /127.0.0.1:46794
2017-08-30 18:02:38,178 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:56224 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 src: /127.0.0.1:56224 dest: /127.0.0.1:51925
2017-08-30 18:02:39,181 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:56224 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (BlockReceiver.java:receiveBlock(1001)) - Exception for BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001
java.net.SocketTimeoutException: 1000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:51925 remote=/127.0.0.1:56224]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:208)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:522)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:962)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:900)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:748)
2017-08-30 18:02:39,182 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1461)) - PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=LAST_IN_PIPELINE: Thread is interrupted.
2017-08-30 18:02:39,182 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=LAST_IN_PIPELINE] INFO datanode.DataNode (BlockReceiver.java:run(1497)) - PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2017-08-30 18:02:39,182 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:56224 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(933)) - opWriteBlock BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 received exception java.net.SocketTimeoutException: 1000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:51925 remote=/127.0.0.1:56224]
2017-08-30 18:02:39,183 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:56224 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] ERROR datanode.DataNode (DataXceiver.java:run(323)) - 127.0.0.1:51925:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56224 dst: /127.0.0.1:51925
java.net.SocketTimeoutException: 1000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:51925 remote=/127.0.0.1:56224]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:208)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:522)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:962)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:900)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:748)
2017-08-30 18:02:39,183 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925]] INFO datanode.DataNode (BlockReceiver.java:packetSentInTime(393)) - A packet was last sent 1006 milliseconds ago.
2017-08-30 18:02:39,186 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925]] WARN datanode.DataNode (BlockReceiver.java:run(1436)) - The downstream error might be due to congestion in upstream including this node.
{noformat}
Propagating the error: java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:444) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1375) at java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:39,186 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925]] WARN datanode.DataNode (BlockReceiver.java:run(1480)) - IOException in BlockReceiver.run(): java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:444) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1375) at java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:39,186 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925]] INFO datanode.DataNode (BlockReceiver.java:run(1483)) - PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925] java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:444) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1375) at java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:39,186 [PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925]] INFO datanode.DataNode (BlockReceiver.java:run(1497)) - PacketResponder: BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:51925] terminating 2017-08-30 18:02:39,186 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:54220 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(447)) - DatanodeRegistration(127.0.0.1:46794, datanodeUuid=d7ee1847-d046-47ea-9a05-7fd25bcb1e86, infoPort=52625, infoSecurePort=0, ipcPort=58248, storageInfo=lv=-57;cid=testClusterID;nsid=785578744;c=1504116157738):Exception writing BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 to mirror 127.0.0.1:51925 java.io.IOException: Interrupted while sleeping. Bailing out. 
at org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery$6.stopSendingPacketDownstream(TestClientProtocolForPipelineRecovery.java:558) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:581) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:962) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:900) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:49,187 [ResponseProcessor for block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001] WARN hdfs.DataStreamer (DataStreamer.java:run(1214)) - Exception for BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 java.net.SocketTimeoutException: 11000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:54220 remote=/127.0.0.1:46794] at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) at java.io.FilterInputStream.read(FilterInputStream.java:83) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:442) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1104) 2017-08-30 18:02:49,188 [DataStreamer for file /user/jenkins/noheartbeat.dat block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001] WARN hdfs.DataStreamer (DataStreamer.java:handleBadDatanode(1560)) - Error Recovery for BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46794,DS-230ffa8a-d07d-4ac0-86e2-9acc7403dc9d,DISK], DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46794,DS-230ffa8a-d07d-4ac0-86e2-9acc7403dc9d,DISK]) is bad. 
2017-08-30 18:02:49,188 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:54220 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (BlockReceiver.java:receiveBlock(1001)) - Exception for BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:522) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:962) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:900) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:49,188 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:54220 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(933)) - opWriteBlock BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 received exception java.io.IOException: Premature EOF from inputStream 2017-08-30 18:02:49,189 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:54220 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] ERROR datanode.DataNode (DataXceiver.java:run(323)) - 127.0.0.1:46794:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54220 dst: /127.0.0.1:46794 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:522) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:962) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:900) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:49,214 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:55316 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 src: /127.0.0.1:55316 dest: /127.0.0.1:46540 2017-08-30 18:02:49,214 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:55315 [Receiving block 
BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 src: /127.0.0.1:55315 dest: /127.0.0.1:46540 2017-08-30 18:02:49,215 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:55316 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO impl.FsDatasetImpl (FsDatasetImpl.java:recoverRbw(1376)) - Recover RBW replica BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 2017-08-30 18:02:49,215 [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@47d9e1e8] INFO datanode.DataNode (DataNode.java:run(2520)) - DataTransfer, at 127.0.0.1:51925: Transmitted BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 (numBytes=0) to /127.0.0.1:46540 2017-08-30 18:02:49,215 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:55316 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO datanode.DataNode (DataXceiver.java:writeBlock(933)) - opWriteBlock BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot recover a non-RBW replica LocalReplicaInPipeline, blk_1073741825_1001, TEMPORARY getNumBytes() = 0 getBytesOnDisk() = 0 getVisibleLength()= -1 getVolume() = /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5 getBlockURI() = file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5/current/BP-1775344991-172.17.0.2-1504116157738/tmp/blk_1073741825 bytesAcked=0 bytesOnDisk=0 2017-08-30 18:02:49,215 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:55315 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] INFO impl.FsDatasetImpl (FsDatasetImpl.java:convertTemporaryToRbw(1470)) - Convert BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 from Temporary to RBW, visible length=0 2017-08-30 18:02:49,215 [DataXceiver for client DFSClient_NONMAPREDUCE_-172490240_1 at /127.0.0.1:55316 [Receiving block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001]] ERROR datanode.DataNode (DataXceiver.java:run(323)) - 127.0.0.1:46540:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55316 dst: /127.0.0.1:46540 org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot recover a non-RBW replica LocalReplicaInPipeline, blk_1073741825_1001, TEMPORARY getNumBytes() = 0 getBytesOnDisk() = 0 getVisibleLength()= -1 getVolume() = /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5 getBlockURI() = file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5/current/BP-1775344991-172.17.0.2-1504116157738/tmp/blk_1073741825 bytesAcked=0 bytesOnDisk=0 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverRbw(FsDatasetImpl.java:1385) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:221) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1291) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at 
java.lang.Thread.run(Thread.java:748) 2017-08-30 18:02:49,216 [DataStreamer for file /user/jenkins/noheartbeat.dat block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001] INFO hdfs.DataStreamer (DataStreamer.java:createBlockOutputStream(1778)) - Exception in createBlockOutputStream blk_1073741825_1001 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:444) at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1751) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1496) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684) 2017-08-30 18:02:49,217 [DataStreamer for file /user/jenkins/noheartbeat.dat block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001] WARN hdfs.DataStreamer (DataStreamer.java:handleBadDatanode(1560)) - Error Recovery for BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46540,DS-a58905ea-f4b4-4b7f-8c16-587ae6b9033b,DISK], DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46540,DS-a58905ea-f4b4-4b7f-8c16-587ae6b9033b,DISK]) is bad. 2017-08-30 18:02:49,218 [IPC Server handler 0 on 47551] WARN blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(405)) - Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2017-08-30 18:02:49,219 [IPC Server handler 0 on 47551] WARN blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(405)) - Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2017-08-30 18:02:49,348 [IPC Server handler 0 on 47551] WARN protocol.BlockStoragePolicy (BlockStoragePolicy.java:chooseStorageTypes(161)) - Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2017-08-30 18:02:49,348 [IPC Server handler 0 on 47551] WARN blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(405)) - Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], 
replicationFallbacks=[ARCHIVE]} 2017-08-30 18:02:49,354 [DataStreamer for file /user/jenkins/noheartbeat.dat block BP-1775344991-172.17.0.2-1504116157738:blk_1073741825_1001] WARN hdfs.DataStreamer (DataStreamer.java:run(844)) - DataStreamer Exception java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322) at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388) at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684) 2017-08-30 18:02:49,355 [main] WARN hdfs.DFSClient (DFSOutputStream.java:flushOrSync(729)) - Error while syncing java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322) at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388) at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684) 2017-08-30 18:02:49,357 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1957)) - Shutting down the Mini HDFS Cluster 2017-08-30 18:02:49,357 [main] ERROR hdfs.DFSClient (DFSClient.java:closeAllFilesBeingWritten(590)) - Failed to close file: /user/jenkins/noheartbeat.dat with inode: 16388 java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:51925,DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. 
at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1322) at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1388) at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1587) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1488) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1470) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1274) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684) 2017-08-30 18:02:49,357 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2005)) - Shutting down DataNode 2 2017-08-30 18:02:49,358 [main] WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been called 2017-08-30 18:02:49,358 [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@70c0a3d5] INFO datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all peers. 2017-08-30 18:02:49,359 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data6)] INFO datanode.VolumeScanner (VolumeScanner.java:run(636)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data6, DS-a58905ea-f4b4-4b7f-8c16-587ae6b9033b) exiting. 2017-08-30 18:02:49,359 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5)] INFO datanode.VolumeScanner (VolumeScanner.java:run(636)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5, DS-3f87b7ae-9ac7-4485-885e-fc9d2475df21) exiting. 2017-08-30 18:02:49,366 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@36c0d0bd{/,null,UNAVAILABLE}{/datanode} 2017-08-30 18:02:49,368 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@5e1fc2aa{HTTP/1.1,[http/1.1]}{localhost:0} 2017-08-30 18:02:49,368 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@615e3f51{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE} 2017-08-30 18:02:49,368 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@2e51d054{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE} 2017-08-30 18:02:49,372 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on 56578 2017-08-30 18:02:49,372 [IPC Server listener on 56578] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on 56578 2017-08-30 18:02:49,373 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder 2017-08-30 18:02:49,373 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] WARN datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted 2017-08-30 18:02:49,374 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] WARN datanode.DataNode (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid 303e97cb-9549-4084-8530-e6f511ebcbe1) service to localhost/127.0.0.1:47551 2017-08-30 18:02:49,374 [BP-1775344991-172.17.0.2-1504116157738 
heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid 303e97cb-9549-4084-8530-e6f511ebcbe1) 2017-08-30 18:02:49,374 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2675)) - Removing block pool BP-1775344991-172.17.0.2-1504116157738 2017-08-30 18:02:49,375 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data5/current/BP-1775344991-172.17.0.2-1504116157738] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-08-30 18:02:49,375 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data6/current/BP-1775344991-172.17.0.2-1504116157738] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-08-30 18:02:49,375 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk service threads 2017-08-30 18:02:49,376 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service threads have been shut down 2017-08-30 18:02:49,376 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async lazy persist service threads 2017-08-30 18:02:49,376 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist service threads have been shut down 2017-08-30 18:02:49,377 [main] INFO datanode.DataNode (DataNode.java:shutdown(2109)) - Shutdown complete. 2017-08-30 18:02:49,377 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2005)) - Shutting down DataNode 1 2017-08-30 18:02:49,377 [main] WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been called 2017-08-30 18:02:49,377 [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@44c13103] INFO datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all peers. 2017-08-30 18:02:49,377 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data3)] INFO datanode.VolumeScanner (VolumeScanner.java:run(636)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data3, DS-274e8cc9-280b-4370-b494-6a4f0d67ccf4) exiting. 2017-08-30 18:02:49,378 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data4)] INFO datanode.VolumeScanner (VolumeScanner.java:run(636)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data4, DS-f402c2a1-2faf-489f-9a91-673666976433) exiting. 
2017-08-30 18:02:49,382 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@42b28ff1{/,null,UNAVAILABLE}{/datanode} 2017-08-30 18:02:49,383 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@36061cf3{HTTP/1.1,[http/1.1]}{localhost:0} 2017-08-30 18:02:49,384 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@481e91b6{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE} 2017-08-30 18:02:49,384 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@6631cb64{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE} 2017-08-30 18:02:49,385 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on 50140 2017-08-30 18:02:49,386 [IPC Server listener on 50140] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on 50140 2017-08-30 18:02:49,386 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder 2017-08-30 18:02:49,386 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] WARN datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted 2017-08-30 18:02:49,387 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] WARN datanode.DataNode (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid 0bd5d006-5456-41c3-b8c3-c9c2a666a83a) service to localhost/127.0.0.1:47551 2017-08-30 18:02:49,387 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid 0bd5d006-5456-41c3-b8c3-c9c2a666a83a) 2017-08-30 18:02:49,387 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2675)) - Removing block pool BP-1775344991-172.17.0.2-1504116157738 2017-08-30 18:02:49,388 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data3/current/BP-1775344991-172.17.0.2-1504116157738] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-08-30 18:02:49,388 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data4/current/BP-1775344991-172.17.0.2-1504116157738] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-08-30 18:02:49,389 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk service threads 2017-08-30 18:02:49,389 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service threads have been shut down 2017-08-30 18:02:49,389 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async lazy persist service threads 2017-08-30 18:02:49,389 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist service 
threads have been shut down 2017-08-30 18:02:49,390 [main] INFO datanode.DataNode (DataNode.java:shutdown(2109)) - Shutdown complete. 2017-08-30 18:02:49,390 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNode(2005)) - Shutting down DataNode 0 2017-08-30 18:02:49,390 [main] WARN datanode.DirectoryScanner (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been called 2017-08-30 18:02:49,390 [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3e17a0a1] INFO datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all peers. 2017-08-30 18:02:49,391 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2)] INFO datanode.VolumeScanner (VolumeScanner.java:run(636)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2, DS-230ffa8a-d07d-4ac0-86e2-9acc7403dc9d) exiting. 2017-08-30 18:02:49,391 [VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1)] INFO datanode.VolumeScanner (VolumeScanner.java:run(636)) - VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1, DS-c7f21b9d-2106-4d12-bb20-c414e1397d77) exiting. 2017-08-30 18:02:49,398 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@29eda4f8{/,null,UNAVAILABLE}{/datanode} 2017-08-30 18:02:49,399 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@1b9776f5{HTTP/1.1,[http/1.1]}{localhost:0} 2017-08-30 18:02:49,400 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@7a7cc52c{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE} 2017-08-30 18:02:49,400 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@4e1ce44{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE} 2017-08-30 18:02:49,402 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on 58248 2017-08-30 18:02:49,403 [IPC Server listener on 58248] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on 58248 2017-08-30 18:02:49,403 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder 2017-08-30 18:02:49,403 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] WARN datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - IncrementalBlockReportManager interrupted 2017-08-30 18:02:49,404 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] WARN datanode.DataNode (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid d7ee1847-d046-47ea-9a05-7fd25bcb1e86) service to localhost/127.0.0.1:47551 2017-08-30 18:02:49,404 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool BP-1775344991-172.17.0.2-1504116157738 (Datanode Uuid d7ee1847-d046-47ea-9a05-7fd25bcb1e86) 2017-08-30 18:02:49,404 [BP-1775344991-172.17.0.2-1504116157738 heartbeating to localhost/127.0.0.1:47551] INFO impl.FsDatasetImpl (FsDatasetImpl.java:shutdownBlockPool(2675)) - Removing block pool BP-1775344991-172.17.0.2-1504116157738 2017-08-30 
18:02:49,405 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current/BP-1775344991-172.17.0.2-1504116157738] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-08-30 18:02:49,405 [refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data2/current/BP-1775344991-172.17.0.2-1504116157738] WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread Interrupted waiting to refresh disk information: sleep interrupted 2017-08-30 18:02:49,406 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk service threads 2017-08-30 18:02:49,406 [main] INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service threads have been shut down 2017-08-30 18:02:49,406 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async lazy persist service threads 2017-08-30 18:02:49,407 [main] INFO impl.RamDiskAsyncLazyPersistService (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist service threads have been shut down 2017-08-30 18:02:49,407 [main] INFO datanode.DataNode (DataNode.java:shutdown(2109)) - Shutdown complete. 2017-08-30 18:02:49,407 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(2038)) - Shutting down the namenode 2017-08-30 18:02:49,407 [main] INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1301)) - Stopping services started for active state 2017-08-30 18:02:49,408 [main] INFO namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1369)) - Ending log segment 1, 8 2017-08-30 18:02:49,408 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@ef1695a] INFO namenode.FSNamesystem (FSNamesystem.java:run(3985)) - LazyPersistFileScrubber was interrupted, exiting 2017-08-30 18:02:49,408 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@238bfd6c] INFO namenode.FSNamesystem (FSNamesystem.java:run(3894)) - NameNodeEditLogRoller was interrupted, exiting 2017-08-30 18:02:49,408 [main] INFO namenode.FSEditLog (FSEditLog.java:logSyncAll(600)) - logSyncAll toSyncToTxId=9 lastSyncedTxid=8 mostRecentTxid=9 2017-08-30 18:02:49,408 [main] INFO namenode.FSEditLog (FSEditLog.java:logSyncAll(605)) - Done logSyncAll lastWrittenTxId=9 lastSyncedTxid=9 mostRecentTxid=9 2017-08-30 18:02:49,408 [main] INFO namenode.FSEditLog (FSEditLog.java:printStatistics(770)) - Number of transactions: 9 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 4 Number of syncs: 6 SyncTimes(ms): 2 1 2017-08-30 18:02:49,409 [main] INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current/edits_inprogress_0000000000000000001 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000009 2017-08-30 18:02:49,410 [main] INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2/current/edits_inprogress_0000000000000000001 -> 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000009 2017-08-30 18:02:49,410 [CacheReplicationMonitor(1864849826)] INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(169)) - Shutting down CacheReplicationMonitor 2017-08-30 18:02:49,411 [main] INFO ipc.Server (Server.java:stop(3074)) - Stopping server on 47551 2017-08-30 18:02:49,412 [IPC Server listener on 47551] INFO ipc.Server (Server.java:run(1178)) - Stopping IPC Server listener on 47551 2017-08-30 18:02:49,414 [IPC Server Responder] INFO ipc.Server (Server.java:run(1312)) - Stopping IPC Server Responder 2017-08-30 18:02:49,415 [StorageInfoMonitor] INFO blockmanagement.BlockManager (BlockManager.java:run(4450)) - Stopping thread. 2017-08-30 18:02:49,415 [RedundancyMonitor] INFO blockmanagement.BlockManager (BlockManager.java:run(4415)) - Stopping RedundancyMonitor. 2017-08-30 18:02:49,422 [main] INFO namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1301)) - Stopping services started for active state 2017-08-30 18:02:49,423 [main] INFO namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1397)) - Stopping services started for standby state 2017-08-30 18:02:49,424 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@5633dafd{/,null,UNAVAILABLE}{/hdfs} 2017-08-30 18:02:49,425 [main] INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@5d5160e6{HTTP/1.1,[http/1.1]}{localhost:0} 2017-08-30 18:02:49,425 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@5e8cda75{/static,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE} 2017-08-30 18:02:49,425 [main] INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@46c3a14d{/logs,file:///testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE} 2017-08-30 18:02:49,426 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping DataNode metrics system... 2017-08-30 18:02:49,428 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - DataNode metrics system stopped. 2017-08-30 18:02:49,428 [main] INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - DataNode metrics system shutdown complete. {noformat}
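For reference, the TestClientProtocolForPipelineRecovery$6.stopSendingPacketDownstream frame in the trace above is the test's own fault injector: it stalls the upstream datanode so the downstream read times out (the SocketTimeoutException at 18:02:39) and pipeline recovery kicks in. Below is a rough sketch of that pattern. The DataNodeFaultInjector hook name comes straight from the stack trace, but I am inferring its exact signature and the sleep length, so treat this as illustrative rather than the test's actual code.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.server.datanode.DataNodeFaultInjector;

public class StallMirrorSketch {
  // Illustrative only: the hook name is from the stack trace above, but its
  // signature and the sleep duration are assumptions on my part.
  static DataNodeFaultInjector stallingInjector() {
    return new DataNodeFaultInjector() {
      @Override
      public void stopSendingPacketDownstream(String mirrAddr) throws IOException {
        try {
          // Stall past the 1000 ms downstream read timeout seen at 18:02:39,
          // so the mirror times out and the client enters pipeline recovery.
          Thread.sleep(10_000);
        } catch (InterruptedException e) {
          // Matches the "Interrupted while sleeping. Bailing out." log line.
          throw new IOException("Interrupted while sleeping. Bailing out.", e);
        }
      }
    };
  }

  public static void main(String[] args) throws Exception {
    DataNodeFaultInjector old = DataNodeFaultInjector.get();
    DataNodeFaultInjector.set(stallingInjector());
    try {
      // ... perform the write/hflush that exercises pipeline recovery ...
    } finally {
      DataNodeFaultInjector.set(old); // always restore the default injector
    }
  }
}
{code}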
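Separately, the final DataStreamer exception is explicit that the client was running with the ALWAYS replace-datanode-on-failure policy when no replacement node could be found. For anyone hitting the same message outside this test, a minimal client-side sketch is below; the .policy key is quoted verbatim from the error message, while the value strings and the .best-effort companion key are from my reading of the HDFS client configuration and should be verified against the docs for your release.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ReplacePolicySketch {
  public static void main(String[] args) throws Exception {
    // Minimal sketch, assuming the standard HDFS client config keys.
    Configuration conf = new Configuration();
    // Key name is quoted in the error message above; the DEFAULT/NEVER/ALWAYS
    // value strings are an assumption worth double-checking.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
    // Assumed companion key: keep writing with the remaining datanodes if a
    // replacement cannot be found, instead of failing the stream.
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Using " + fs.getUri());
  }
}
{code}

Note this is only the client-side knob the error text calls out, not a fix for the test flake itself.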