[ https://issues.apache.org/jira/browse/HDFS-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16587833#comment-16587833 ]

Shweta commented on HDFS-13837:
-------------------------------

[~xiaochen], as suggested, I have attached a patch for the flaky test. The 
added logging helps trace the execution of the LeaseRenewer. Please review 
and let me know if any further details should be added. Thank you.
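For context, assertion failures like the one at TestDistributedFileSystem.java:449 typically happen when the test checks the LeaseRenewer's state before its background thread has actually finished. A common way to stabilize such tests (this is only a generic sketch of the polling pattern, similar in spirit to Hadoop's GenericTestUtils.waitFor; it is not the attached patch, and the WaitFor class name is made up for illustration) is to poll the condition with a timeout instead of asserting once:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
  /**
   * Polls {@code condition} every {@code intervalMs} milliseconds until it
   * returns true or {@code timeoutMs} elapses. Returns the final result of
   * the condition, so the caller can assert on it.
   */
  public static boolean waitFor(BooleanSupplier condition,
                                long intervalMs, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;
      }
      Thread.sleep(intervalMs);
    }
    return condition.getAsBoolean();
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Simulate an asynchronous shutdown that completes after ~200 ms,
    // standing in for "the LeaseRenewer thread has stopped".
    boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200,
        10, 2000);
    System.out.println(ok);
  }
}
```

With a one-shot assert the test races the renewer thread; polling bounds the wait while still failing deterministically after the timeout.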

> hdfs.TestDistributedFileSystem.testDFSClient: test is flaky
> -----------------------------------------------------------
>
>                 Key: HDFS-13837
>                 URL: https://issues.apache.org/jira/browse/HDFS-13837
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>            Reporter: Shweta
>            Assignee: Shweta
>            Priority: Major
>         Attachments: HDFS-13837.001.patch, 
> TestDistributedFileSystem.testDFSClient_Stderr_log
>
>
> Stack Trace :
> {noformat} 
> java.lang.AssertionError
>  at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClient(TestDistributedFileSystem.java:449)
> {noformat}
>   Stdout:
> {noformat}
> [truncated]kmanagement.BlockManager 
> (BlockManager.java:processMisReplicatesAsync(3385)) - Number of blocks being 
> written = 0
>  2018-07-31 21:42:46,675 [Reconstruction Queue Initializer] INFO 
> hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3388)) - STATE* 
> Replication Queue initialization scan for invalid, over- and under-replicated 
> blocks completed in 5 msec
>  2018-07-31 21:42:46,676 [IPC Server Responder] INFO ipc.Server 
> (Server.java:run(1307)) - IPC Server Responder: starting
>  2018-07-31 21:42:46,676 [IPC Server listener on port1] INFO ipc.Server 
> (Server.java:run(1146)) - IPC Server listener on port1: starting
>  2018-07-31 21:42:46,678 [main] INFO namenode.NameNode 
> (NameNode.java:startCommonServices(831)) - NameNode RPC up at: 
> localhost/x.x.x.x:port1
>  2018-07-31 21:42:46,678 [main] INFO namenode.FSNamesystem 
> (FSNamesystem.java:startActiveServices(1230)) - Starting services required 
> for active state
>  2018-07-31 21:42:46,678 [main] INFO namenode.FSDirectory 
> (FSDirectory.java:updateCountForQuota(758)) - Initializing quota with 4 
> thread(s)
>  2018-07-31 21:42:46,679 [main] INFO namenode.FSDirectory 
> (FSDirectory.java:updateCountForQuota(767)) - Quota initialization completed 
> in 0 milliseconds
>  name space=1
>  storage space=0
>  storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
>  2018-07-31 21:42:46,682 [CacheReplicationMonitor(1111752355)] INFO 
> blockmanagement.CacheReplicationMonitor 
> (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor 
> with interval 30000 milliseconds
>  2018-07-31 21:42:46,686 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:startDataNodes(1599)) - Starting DataNode 0 with 
> dfs.datanode.data.dir: 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  2018-07-31 21:42:46,687 [main] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  2018-07-31 21:42:46,687 [main] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  2018-07-31 21:42:46,695 [main] INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
>  2018-07-31 21:42:46,695 [main] INFO common.Util 
> (Util.java:isDiskStatsEnabled(395)) - 
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
> profiling
>  2018-07-31 21:42:46,695 [main] INFO datanode.BlockScanner 
> (BlockScanner.java:<init>(184)) - Initialized block scanner with 
> targetBytesPerSec 1048576
>  2018-07-31 21:42:46,696 [main] INFO datanode.DataNode 
> (DataNode.java:<init>(496)) - Configured hostname is x.x.x.x
>  2018-07-31 21:42:46,696 [main] INFO common.Util 
> (Util.java:isDiskStatsEnabled(395)) - 
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
> profiling
>  2018-07-31 21:42:46,696 [main] INFO datanode.DataNode 
> (DataNode.java:startDataNode(1385)) - Starting DataNode with maxLockedMemory 
> = 0
>  2018-07-31 21:42:46,697 [main] INFO datanode.DataNode 
> (DataNode.java:initDataXceiver(1142)) - Opened streaming server at 
> /x.x.x.x:port2
>  2018-07-31 21:42:46,697 [main] INFO datanode.DataNode 
> (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 10485760 bytes/s
>  2018-07-31 21:42:46,697 [main] INFO datanode.DataNode 
> (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
>  2018-07-31 21:42:46,699 [main] INFO server.AuthenticationFilter 
> (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to 
> initialize FileSignerSecretProvider, falling back to use random secrets.
>  2018-07-31 21:42:46,699 [main] INFO http.HttpRequestLog 
> (HttpRequestLog.java:getRequestLog(81)) - Http request log for 
> http.requests.datanode is not defined
>  2018-07-31 21:42:46,700 [main] INFO http.HttpServer2 
> (HttpServer2.java:addGlobalFilter(923)) - Added global filter 'safety' 
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>  2018-07-31 21:42:46,701 [main] INFO http.HttpServer2 
> (HttpServer2.java:addFilter(896)) - Added filter static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context datanode
>  2018-07-31 21:42:46,701 [main] INFO http.HttpServer2 
> (HttpServer2.java:addFilter(906)) - Added filter static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context static
>  2018-07-31 21:42:46,701 [main] INFO http.HttpServer2 
> (HttpServer2.java:addFilter(906)) - Added filter static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context logs
>  2018-07-31 21:42:46,702 [main] INFO http.HttpServer2 
> (HttpServer2.java:bindListener(1123)) - Jetty bound to port 35341
>  2018-07-31 21:42:46,702 [main] INFO server.Server (Server.java:doStart(346)) 
> - jetty-9.3.20.v20170531
>  2018-07-31 21:42:46,704 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStart(781)) - Started 
> o.e.j.s.ServletContextHandler@38b5f25
> {/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
> 2018-07-31 21:42:46,705 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStart(781)) - Started 
> o.e.j.s.ServletContextHandler@67594471
> {/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,AVAILABLE}
> 2018-07-31 21:42:46,711 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@68fe48d7
> {/,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/datanode/,AVAILABLE}{/datanode}
>  2018-07-31 21:42:46,712 [main] INFO server.AbstractConnector 
> (AbstractConnector.java:doStart(278)) - Started 
> ServerConnector@379ce046{HTTP/1.1,[http/1.1]}{localhost:35341}
>  2018-07-31 21:42:46,712 [main] INFO server.Server (Server.java:doStart(414)) 
> - Started @27029ms
>  2018-07-31 21:42:46,715 [main] INFO web.DatanodeHttpServer 
> (DatanodeHttpServer.java:start(239)) - Listening HTTP traffic on 
> /x.x.x.x:port3
>  2018-07-31 21:42:46,716 [main] INFO datanode.DataNode 
> (DataNode.java:startDataNode(1412)) - dnUserName = jenkins
>  2018-07-31 21:42:46,716 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@10acd6] INFO 
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause 
> monitor
>  2018-07-31 21:42:46,716 [main] INFO datanode.DataNode 
> (DataNode.java:startDataNode(1413)) - supergroup = supergroup
>  2018-07-31 21:42:46,717 [main] INFO ipc.CallQueueManager 
> (CallQueueManager.java:<init>(84)) - Using callQueue: class 
> java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class 
> org.apache.hadoop.ipc.DefaultRpcScheduler
>  2018-07-31 21:42:46,718 [Socket Reader #1 for port port4] INFO ipc.Server 
> (Server.java:run(1067)) - Starting Socket Reader #1 for port port4
>  2018-07-31 21:42:46,719 [main] INFO datanode.DataNode 
> (DataNode.java:initIpcServer(1029)) - Opened IPC server at /x.x.x.x:port4
>  2018-07-31 21:42:46,721 [main] INFO datanode.DataNode 
> (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for 
> nameservices: null
>  2018-07-31 21:42:46,721 [main] INFO datanode.DataNode 
> (BlockPoolManager.java:doRefreshNamenodes(198)) - Starting BPOfferServices 
> for nameservices: <default>
>  2018-07-31 21:42:46,722 [Thread-2206] INFO datanode.DataNode 
> (BPServiceActor.java:run(809)) - Block pool <registering> (Datanode Uuid 
> unassigned) service to localhost/x.x.x.x:port1 starting to offer service
>  2018-07-31 21:42:46,724 [IPC Server Responder] INFO ipc.Server 
> (Server.java:run(1307)) - IPC Server Responder: starting
>  2018-07-31 21:42:46,726 [IPC Server listener on port4] INFO ipc.Server 
> (Server.java:run(1146)) - IPC Server listener on port4: starting
>  2018-07-31 21:42:46,729 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:startDataNodes(1599)) - Starting DataNode 1 with 
> dfs.datanode.data.dir: 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3,[DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  2018-07-31 21:42:46,730 [Thread-2206] INFO datanode.DataNode 
> (BPOfferService.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE 
> Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) 
> service to localhost/x.x.x.x:port1
>  2018-07-31 21:42:46,730 [main] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  2018-07-31 21:42:46,731 [main] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  2018-07-31 21:42:46,732 [Thread-2206] INFO common.Storage 
> (DataStorage.java:getParallelVolumeLoadThreadsNum(346)) - Using 2 threads to 
> upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, 
> dataDirs=2)
>  2018-07-31 21:42:46,735 [Thread-2206] INFO common.Storage 
> (Storage.java:tryLock(847)) - Lock on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/in_use.lock
>  acquired by nodename [email protected]
>  2018-07-31 21:42:46,736 [Thread-2206] INFO common.Storage 
> (DataStorage.java:loadStorageDirectory(273)) - Storage directory with 
> location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  is not formatted for namespace 1027244560. Formatting...
>  2018-07-31 21:42:46,736 [Thread-2206] INFO common.Storage 
> (DataStorage.java:createStorageID(150)) - Generated new storageID 
> DS-1d19d47a-b472-463a-a1ed-43be073676f5 for directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  2018-07-31 21:42:46,740 [main] INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:init(158)) - DataNode metrics system started (again)
>  2018-07-31 21:42:46,740 [main] INFO common.Util 
> (Util.java:isDiskStatsEnabled(395)) - 
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
> profiling
>  2018-07-31 21:42:46,740 [main] INFO datanode.BlockScanner 
> (BlockScanner.java:<init>(184)) - Initialized block scanner with 
> targetBytesPerSec 1048576
>  2018-07-31 21:42:46,740 [main] INFO datanode.DataNode 
> (DataNode.java:<init>(496)) - Configured hostname is x.x.x.x
>  2018-07-31 21:42:46,741 [main] INFO common.Util 
> (Util.java:isDiskStatsEnabled(395)) - 
> dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO 
> profiling
>  2018-07-31 21:42:46,741 [main] INFO datanode.DataNode 
> (DataNode.java:startDataNode(1385)) - Starting DataNode with maxLockedMemory 
> = 0
>  2018-07-31 21:42:46,741 [main] INFO datanode.DataNode 
> (DataNode.java:initDataXceiver(1142)) - Opened streaming server at 
> /x.x.x.x:port5
>  2018-07-31 21:42:46,742 [main] INFO datanode.DataNode 
> (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 10485760 bytes/s
>  2018-07-31 21:42:46,742 [main] INFO datanode.DataNode 
> (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
>  2018-07-31 21:42:46,744 [main] INFO server.AuthenticationFilter 
> (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to 
> initialize FileSignerSecretProvider, falling back to use random secrets.
>  2018-07-31 21:42:46,744 [Thread-2206] INFO common.Storage 
> (Storage.java:tryLock(847)) - Lock on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/in_use.lock
>  acquired by nodename [email protected]
>  2018-07-31 21:42:46,744 [main] INFO http.HttpRequestLog 
> (HttpRequestLog.java:getRequestLog(81)) - Http request log for 
> http.requests.datanode is not defined
>  2018-07-31 21:42:46,744 [Thread-2206] INFO common.Storage 
> (DataStorage.java:loadStorageDirectory(273)) - Storage directory with 
> location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  is not formatted for namespace 1027244560. Formatting...
>  2018-07-31 21:42:46,744 [Thread-2206] INFO common.Storage 
> (DataStorage.java:createStorageID(150)) - Generated new storageID 
> DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f for directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  2018-07-31 21:42:46,745 [main] INFO http.HttpServer2 
> (HttpServer2.java:addGlobalFilter(923)) - Added global filter 'safety' 
> (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>  2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 
> (HttpServer2.java:addFilter(896)) - Added filter static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context datanode
>  2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 
> (HttpServer2.java:addFilter(906)) - Added filter static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context logs
>  2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 
> (HttpServer2.java:addFilter(906)) - Added filter static_user_filter 
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
> context static
>  2018-07-31 21:42:46,746 [main] INFO http.HttpServer2 
> (HttpServer2.java:bindListener(1123)) - Jetty bound to port 35966
>  2018-07-31 21:42:46,747 [main] INFO server.Server (Server.java:doStart(346)) 
> - jetty-9.3.20.v20170531
>  2018-07-31 21:42:46,749 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStart(781)) - Started 
> o.e.j.s.ServletContextHandler@2c6aed22{/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,AVAILABLE}
>  2018-07-31 21:42:46,750 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStart(781)) - Started 
> o.e.j.s.ServletContextHandler@7acfb656{/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,AVAILABLE}
>  2018-07-31 21:42:46,755 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStart(781)) - Started 
> o.e.j.w.WebAppContext@41ffaeb8{/,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/datanode/,AVAILABLE}{/datanode}
> 2018-07-31 21:42:46,756 [main] INFO server.AbstractConnector 
> (AbstractConnector.java:doStart(278)) - Started ServerConnector@43f0c2d1
> {HTTP/1.1,[http/1.1]}{localhost:35966}
>  2018-07-31 21:42:46,756 [main] INFO server.Server (Server.java:doStart(414)) 
> - Started @27074ms
>  2018-07-31 21:42:46,760 [main] INFO web.DatanodeHttpServer 
> (DatanodeHttpServer.java:start(239)) - Listening HTTP traffic on 
> /x.x.x.x:port6
>  2018-07-31 21:42:46,761 [main] INFO datanode.DataNode 
> (DataNode.java:startDataNode(1412)) - dnUserName = jenkins
>  2018-07-31 21:42:46,761 [main] INFO datanode.DataNode 
> (DataNode.java:startDataNode(1413)) - supergroup = supergroup
>  2018-07-31 21:42:46,761 [Thread-2206] INFO common.Storage 
> (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage 
> directories for bpid BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,761 [main] INFO ipc.CallQueueManager 
> (CallQueueManager.java:<init>(84)) - Using callQueue: class 
> java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class 
> org.apache.hadoop.ipc.DefaultRpcScheduler
>  2018-07-31 21:42:46,761 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@4096aa05] INFO 
> util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause 
> monitor
>  2018-07-31 21:42:46,761 [Thread-2206] INFO common.Storage 
> (Storage.java:lock(806)) - Locking is disabled for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,761 [Thread-2206] INFO common.Storage 
> (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage 
> directory for location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  and block pool id BP-1309332346-172.26.0.186-1533098566550 is not formatted 
> for BP-1309332346-172.26.0.186-1533098566550. Formatting ...
>  2018-07-31 21:42:46,762 [Thread-2206] INFO common.Storage 
> (BlockPoolSliceStorage.java:format(280)) - Formatting block pool 
> BP-1309332346-172.26.0.186-1533098566550 directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-1309332346-172.26.0.186-1533098566550/current
>  2018-07-31 21:42:46,762 [Socket Reader #1 for port port7] INFO ipc.Server 
> (Server.java:run(1067)) - Starting Socket Reader #1 for port port7
>  2018-07-31 21:42:46,764 [main] INFO datanode.DataNode 
> (DataNode.java:initIpcServer(1029)) - Opened IPC server at /x.x.x.x:port7
>  2018-07-31 21:42:46,766 [main] INFO datanode.DataNode 
> (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for 
> nameservices: null
>  2018-07-31 21:42:46,766 [main] INFO datanode.DataNode 
> (BlockPoolManager.java:doRefreshNamenodes(198)) - Starting BPOfferServices 
> for nameservices: <default>
>  2018-07-31 21:42:46,767 [Thread-2237] INFO datanode.DataNode 
> (BPServiceActor.java:run(809)) - Block pool <registering> (Datanode Uuid 
> unassigned) service to localhost/x.x.x.x:port1 starting to offer service
>  2018-07-31 21:42:46,768 [IPC Server Responder] INFO ipc.Server 
> (Server.java:run(1307)) - IPC Server Responder: starting
>  2018-07-31 21:42:46,769 [IPC Server listener on port7] INFO ipc.Server 
> (Server.java:run(1146)) - IPC Server listener on port7: starting
>  2018-07-31 21:42:46,770 [Thread-2237] INFO datanode.DataNode 
> (BPOfferService.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE 
> Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) 
> service to localhost/x.x.x.x:port1
>  2018-07-31 21:42:46,770 [Thread-2237] INFO common.Storage 
> (DataStorage.java:getParallelVolumeLoadThreadsNum(346)) - Using 2 threads to 
> upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, 
> dataDirs=2)
>  2018-07-31 21:42:46,772 [main] DEBUG hdfs.DFSClient 
> (DFSClient.java:<init>(311)) - Sets 
> dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
>  2018-07-31 21:42:46,775 [IPC Server handler 2 on port1] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7816)) - allowed=true 
> ugi=jenkins (auth:SIMPLE) ip=/x.x.x.x cmd=datanodeReport src=null dst=null 
> perm=null proto=rpc
>  2018-07-31 21:42:46,776 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shouldWait(2653)) - dnInfo.length != numDataNodes
>  2018-07-31 21:42:46,776 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2574)) - Waiting for cluster to become active
>  2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage 
> (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage 
> directories for bpid BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage 
> (Storage.java:lock(806)) - Locking is disabled for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,780 [Thread-2237] INFO common.Storage 
> (Storage.java:tryLock(847)) - Lock on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/in_use.lock
>  acquired by nodename [email protected]
>  2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage 
> (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage 
> directory for location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  and block pool id BP-1309332346-172.26.0.186-1533098566550 is not formatted 
> for BP-1309332346-172.26.0.186-1533098566550. Formatting ...
>  2018-07-31 21:42:46,780 [Thread-2206] INFO common.Storage 
> (BlockPoolSliceStorage.java:format(280)) - Formatting block pool 
> BP-1309332346-172.26.0.186-1533098566550 directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-1309332346-172.26.0.186-1533098566550/current
>  2018-07-31 21:42:46,780 [Thread-2237] INFO common.Storage 
> (DataStorage.java:loadStorageDirectory(273)) - Storage directory with 
> location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  is not formatted for namespace 1027244560. Formatting...
>  2018-07-31 21:42:46,781 [Thread-2237] INFO common.Storage 
> (DataStorage.java:createStorageID(150)) - Generated new storageID 
> DS-386f4969-f72d-4a3a-baad-ef59d4bc3016 for directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  2018-07-31 21:42:46,789 [Thread-2206] INFO datanode.DataNode 
> (DataNode.java:initStorage(1693)) - Setting up storage: 
> nsid=1027244560;bpid=BP-1309332346-172.26.0.186-1533098566550;lv=-57;nsInfo=lv=-64;cid=testClusterID;nsid=1027244560;c=1533098566550;bpid=BP-1309332346-172.26.0.186-1533098566550;dnuuid=null
>  2018-07-31 21:42:46,792 [Thread-2237] INFO common.Storage 
> (Storage.java:tryLock(847)) - Lock on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/in_use.lock
>  acquired by nodename [email protected]
>  2018-07-31 21:42:46,792 [Thread-2237] INFO common.Storage 
> (DataStorage.java:loadStorageDirectory(273)) - Storage directory with 
> location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  is not formatted for namespace 1027244560. Formatting...
>  2018-07-31 21:42:46,792 [Thread-2237] INFO common.Storage 
> (DataStorage.java:createStorageID(150)) - Generated new storageID 
> DS-cac8e9dc-026d-474d-a76b-57d577c44846 for directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  2018-07-31 21:42:46,794 [Thread-2206] INFO datanode.DataNode 
> (DataNode.java:checkDatanodeUuid(1517)) - Generated and persisted new 
> Datanode UUID 785a0dd8-bdae-40c1-94f6-a6b639dafb14
>  2018-07-31 21:42:46,796 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:addVolume(306)) - Added new volume: 
> DS-1d19d47a-b472-463a-a1ed-43be073676f5
>  2018-07-31 21:42:46,796 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:addVolume(431)) - Added volume - 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,
>  StorageType: DISK
>  2018-07-31 21:42:46,797 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:addVolume(306)) - Added new volume: 
> DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f
>  2018-07-31 21:42:46,797 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:addVolume(431)) - Added volume - 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2,
>  StorageType: DISK
>  2018-07-31 21:42:46,797 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:registerMBean(2169)) - Registered FSDatasetState MBean
>  2018-07-31 21:42:46,798 [Thread-2206] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  2018-07-31 21:42:46,799 [Thread-2206] INFO checker.DatasetVolumeChecker 
> (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check for 
> volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  2018-07-31 21:42:46,799 [Thread-2206] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  2018-07-31 21:42:46,799 [Thread-2206] INFO checker.DatasetVolumeChecker 
> (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check for 
> volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  2018-07-31 21:42:46,799 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:addBlockPool(2655)) - Adding block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,800 [Thread-2251] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(409)) - Scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1...
>  2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage 
> (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage 
> directories for bpid BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage 
> (Storage.java:lock(806)) - Locking is disabled for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage 
> (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage 
> directory for location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  and block pool id BP-1309332346-172.26.0.186-1533098566550 is not formatted 
> for BP-1309332346-172.26.0.186-1533098566550. Formatting ...
>  2018-07-31 21:42:46,801 [Thread-2237] INFO common.Storage 
> (BlockPoolSliceStorage.java:format(280)) - Formatting block pool 
> BP-1309332346-172.26.0.186-1533098566550 directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-1309332346-172.26.0.186-1533098566550/current
>  2018-07-31 21:42:46,804 [Thread-2252] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(409)) - Scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2...
>  2018-07-31 21:42:46,811 [Thread-2237] INFO common.Storage 
> (BlockPoolSliceStorage.java:recoverTransitionRead(252)) - Analyzing storage 
> directories for bpid BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,811 [Thread-2237] INFO common.Storage 
> (Storage.java:lock(806)) - Locking is disabled for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,811 [Thread-2237] INFO common.Storage 
> (BlockPoolSliceStorage.java:loadStorageDirectory(168)) - Block pool storage 
> directory for location 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  and block pool id BP-1309332346-172.26.0.186-1533098566550 is not formatted 
> for BP-1309332346-172.26.0.186-1533098566550. Formatting ...
>  2018-07-31 21:42:46,812 [Thread-2237] INFO common.Storage 
> (BlockPoolSliceStorage.java:format(280)) - Formatting block pool 
> BP-1309332346-172.26.0.186-1533098566550 directory 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-1309332346-172.26.0.186-1533098566550/current
>  2018-07-31 21:42:46,813 [Thread-2252] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(414)) - Time taken to scan block pool 
> BP-1309332346-172.26.0.186-1533098566550 on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2:
>  9ms
>  2018-07-31 21:42:46,816 [Thread-2237] INFO datanode.DataNode 
> (DataNode.java:initStorage(1693)) - Setting up storage: 
> nsid=1027244560;bpid=BP-1309332346-172.26.0.186-1533098566550;lv=-57;nsInfo=lv=-64;cid=testClusterID;nsid=1027244560;c=1533098566550;bpid=BP-1309332346-172.26.0.186-1533098566550;dnuuid=null
>  2018-07-31 21:42:46,817 [Thread-2251] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(414)) - Time taken to scan block pool 
> BP-1309332346-172.26.0.186-1533098566550 on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1:
>  17ms
>  2018-07-31 21:42:46,817 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:addBlockPool(440)) - Total time to scan all replicas for 
> block pool BP-1309332346-172.26.0.186-1533098566550: 18ms
>  2018-07-31 21:42:46,817 [Thread-2255] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(198)) - Adding replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1...
>  2018-07-31 21:42:46,818 [Thread-2255] INFO impl.BlockPoolSlice 
> (BlockPoolSlice.java:readReplicasFromCache(770)) - Replica Cache file: 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-1309332346-172.26.0.186-1533098566550/current/replicas
>  doesn't exist 
>  2018-07-31 21:42:46,818 [Thread-2256] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(198)) - Adding replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2...
>  2018-07-31 21:42:46,818 [Thread-2255] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(203)) - Time to add replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1:
>  0ms
>  2018-07-31 21:42:46,818 [Thread-2256] INFO impl.BlockPoolSlice 
> (BlockPoolSlice.java:readReplicasFromCache(770)) - Replica Cache file: 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-1309332346-172.26.0.186-1533098566550/current/replicas
>  doesn't exist 
>  2018-07-31 21:42:46,818 [Thread-2256] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(203)) - Time to add replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2:
>  0ms
>  2018-07-31 21:42:46,819 [Thread-2206] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:getAllVolumesMap(229)) - Total time to add all replicas to 
> map: 2ms
>  2018-07-31 21:42:46,819 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(386)) - Now scanning bpid 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
>  2018-07-31 21:42:46,819 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(386)) - Now scanning bpid 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
>  2018-07-31 21:42:46,819 [Thread-2206] INFO datanode.DirectoryScanner 
> (DirectoryScanner.java:start(281)) - Periodic Directory Tree Verification 
> scan starting at 7/31/18 10:52 PM with interval of 21600000ms
>  2018-07-31 21:42:46,819 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2,
>  DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f): finished scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,819 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,
>  DS-1d19d47a-b472-463a-a1ed-43be073676f5): finished scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,820 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:register(763)) - Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14) service to localhost/x.x.x.x:port1 
> beginning handshake with NN
>  2018-07-31 21:42:46,821 [IPC Server handler 3 on port1] INFO 
> hdfs.StateChange (DatanodeManager.java:registerDatanode(1038)) - BLOCK* 
> registerDatanode: from DatanodeRegistration(x.x.x.x:port2, 
> datanodeUuid=785a0dd8-bdae-40c1-94f6-a6b639dafb14, infoPort=port3, 
> infoSecurePort=0, ipcPort=port4, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1027244560;c=1533098566550) storage 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14
>  2018-07-31 21:42:46,821 [IPC Server handler 3 on port1] INFO 
> net.NetworkTopology (NetworkTopology.java:add(145)) - Adding a new node: 
> /default-rack/x.x.x.x:port2
>  2018-07-31 21:42:46,821 [IPC Server handler 3 on port1] INFO 
> blockmanagement.BlockReportLeaseManager 
> (BlockReportLeaseManager.java:registerNode(204)) - Registered DN 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14 (x.x.x.x:port2).
>  2018-07-31 21:42:46,823 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:register(782)) - Block pool Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14) service to localhost/x.x.x.x:port1 
> successfully registered with NN
>  2018-07-31 21:42:46,823 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:offerService(612)) - For namenode 
> localhost/x.x.x.x:port1 using BLOCKREPORT_INTERVAL of 21600000msec 
> CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
>  2018-07-31 21:42:46,824 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(403)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,
>  DS-1d19d47a-b472-463a-a1ed-43be073676f5): no suitable block pools found to 
> scan. Waiting 1814399995 ms.
>  2018-07-31 21:42:46,824 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(403)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2,
>  DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f): no suitable block pools found to 
> scan. Waiting 1814399995 ms.
>  2018-07-31 21:42:46,825 [IPC Server handler 4 on port1] INFO 
> blockmanagement.DatanodeDescriptor 
> (DatanodeDescriptor.java:updateStorage(949)) - Adding new storage ID 
> DS-1d19d47a-b472-463a-a1ed-43be073676f5 for DN x.x.x.x:port2
>  2018-07-31 21:42:46,825 [IPC Server handler 4 on port1] INFO 
> blockmanagement.DatanodeDescriptor 
> (DatanodeDescriptor.java:updateStorage(949)) - Adding new storage ID 
> DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f for DN x.x.x.x:port2
>  2018-07-31 21:42:46,827 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2462)) - BLOCK* processReport 
> 0xa2b93752cc445e4b: Processing first storage report for 
> DS-1d19d47a-b472-463a-a1ed-43be073676f5 from datanode 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14
>  2018-07-31 21:42:46,828 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2488)) - BLOCK* processReport 
> 0xa2b93752cc445e4b: from storage DS-1d19d47a-b472-463a-a1ed-43be073676f5 node 
> DatanodeRegistration(x.x.x.x:port2, 
> datanodeUuid=785a0dd8-bdae-40c1-94f6-a6b639dafb14, infoPort=port3, 
> infoSecurePort=0, ipcPort=port4, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1027244560;c=1533098566550), 
> blocks: 0, hasStaleStorage: true, processing time: 1 msecs, 
> invalidatedBlocks: 0
>  2018-07-31 21:42:46,828 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2462)) - BLOCK* processReport 
> 0xa2b93752cc445e4b: Processing first storage report for 
> DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f from datanode 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14
>  2018-07-31 21:42:46,828 [Thread-2237] INFO datanode.DataNode 
> (DataNode.java:checkDatanodeUuid(1517)) - Generated and persisted new 
> Datanode UUID 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d
>  2018-07-31 21:42:46,828 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2488)) - BLOCK* processReport 
> 0xa2b93752cc445e4b: from storage DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f node 
> DatanodeRegistration(x.x.x.x:port2, 
> datanodeUuid=785a0dd8-bdae-40c1-94f6-a6b639dafb14, infoPort=port3, 
> infoSecurePort=0, ipcPort=port4, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1027244560;c=1533098566550), 
> blocks: 0, hasStaleStorage: false, processing time: 0 msecs, 
> invalidatedBlocks: 0
>  2018-07-31 21:42:46,830 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:addVolume(306)) - Added new volume: 
> DS-386f4969-f72d-4a3a-baad-ef59d4bc3016
>  2018-07-31 21:42:46,830 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:addVolume(431)) - Added volume - 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3,
>  StorageType: DISK
>  2018-07-31 21:42:46,830 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:blockReport(422)) - Successfully sent block report 
> 0xa2b93752cc445e4b, containing 2 storage report(s), of which we sent 2. The 
> reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate 
> and 3 msecs for RPC and NN processing. Got back one command: 
> FinalizeCommand/5.
>  2018-07-31 21:42:46,830 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,831 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:addVolume(306)) - Added new volume: 
> DS-cac8e9dc-026d-474d-a76b-57d577c44846
>  2018-07-31 21:42:46,831 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:addVolume(431)) - Added volume - 
> [DISK]file:/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4,
>  StorageType: DISK
>  2018-07-31 21:42:46,831 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:registerMBean(2169)) - Registered FSDatasetState MBean
>  2018-07-31 21:42:46,832 [Thread-2237] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  2018-07-31 21:42:46,833 [Thread-2237] INFO checker.DatasetVolumeChecker 
> (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check for 
> volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  2018-07-31 21:42:46,833 [Thread-2237] INFO checker.ThrottledAsyncChecker 
> (ThrottledAsyncChecker.java:schedule(122)) - Scheduling a check for 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  2018-07-31 21:42:46,833 [Thread-2237] INFO checker.DatasetVolumeChecker 
> (DatasetVolumeChecker.java:checkAllVolumes(219)) - Scheduled health check for 
> volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  2018-07-31 21:42:46,833 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:addBlockPool(2655)) - Adding block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,834 [Thread-2262] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(409)) - Scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3...
>  2018-07-31 21:42:46,834 [Thread-2263] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(409)) - Scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4...
>  2018-07-31 21:42:46,840 [Thread-2263] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(414)) - Time taken to scan block pool 
> BP-1309332346-172.26.0.186-1533098566550 on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4:
>  6ms
>  2018-07-31 21:42:46,842 [Thread-2262] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(414)) - Time taken to scan block pool 
> BP-1309332346-172.26.0.186-1533098566550 on 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3:
>  8ms
>  2018-07-31 21:42:46,842 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:addBlockPool(440)) - Total time to scan all replicas for 
> block pool BP-1309332346-172.26.0.186-1533098566550: 9ms
>  2018-07-31 21:42:46,842 [Thread-2266] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(198)) - Adding replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3...
>  2018-07-31 21:42:46,843 [Thread-2266] INFO impl.BlockPoolSlice 
> (BlockPoolSlice.java:readReplicasFromCache(770)) - Replica Cache file: 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-1309332346-172.26.0.186-1533098566550/current/replicas
>  doesn't exist 
>  2018-07-31 21:42:46,843 [Thread-2267] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(198)) - Adding replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4...
>  2018-07-31 21:42:46,843 [Thread-2266] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(203)) - Time to add replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3:
>  0ms
>  2018-07-31 21:42:46,843 [Thread-2267] INFO impl.BlockPoolSlice 
> (BlockPoolSlice.java:readReplicasFromCache(770)) - Replica Cache file: 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-1309332346-172.26.0.186-1533098566550/current/replicas
>  doesn't exist 
>  2018-07-31 21:42:46,843 [Thread-2267] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:run(203)) - Time to add replicas to map for block pool 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4:
>  0ms
>  2018-07-31 21:42:46,843 [Thread-2237] INFO impl.FsDatasetImpl 
> (FsVolumeList.java:getAllVolumesMap(229)) - Total time to add all replicas to 
> map: 1ms
>  2018-07-31 21:42:46,844 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(386)) - Now scanning bpid 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
>  2018-07-31 21:42:46,844 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(386)) - Now scanning bpid 
> BP-1309332346-172.26.0.186-1533098566550 on volume 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4
>  2018-07-31 21:42:46,844 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3,
>  DS-386f4969-f72d-4a3a-baad-ef59d4bc3016): finished scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,844 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(544)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4,
>  DS-cac8e9dc-026d-474d-a76b-57d577c44846): finished scanning block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,844 [Thread-2237] INFO datanode.DirectoryScanner 
> (DirectoryScanner.java:start(281)) - Periodic Directory Tree Verification 
> scan starting at 7/31/18 10:08 PM with interval of 21600000ms
>  2018-07-31 21:42:46,844 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(403)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3,
>  DS-386f4969-f72d-4a3a-baad-ef59d4bc3016): no suitable block pools found to 
> scan. Waiting 1814400000 ms.
>  2018-07-31 21:42:46,844 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)]
>  INFO datanode.VolumeScanner 
> (VolumeScanner.java:findNextUsableBlockIter(403)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4,
>  DS-cac8e9dc-026d-474d-a76b-57d577c44846): no suitable block pools found to 
> scan. Waiting 1814400000 ms.
>  2018-07-31 21:42:46,845 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:register(763)) - Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d) service to localhost/x.x.x.x:port1 
> beginning handshake with NN
>  2018-07-31 21:42:46,846 [IPC Server handler 6 on port1] INFO 
> hdfs.StateChange (DatanodeManager.java:registerDatanode(1038)) - BLOCK* 
> registerDatanode: from DatanodeRegistration(x.x.x.x:port5, 
> datanodeUuid=48eaddaf-0bf7-47e5-bc33-4a640b4cde9d, infoPort=port6, 
> infoSecurePort=0, ipcPort=port7, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1027244560;c=1533098566550) storage 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d
>  2018-07-31 21:42:46,846 [IPC Server handler 6 on port1] INFO 
> net.NetworkTopology (NetworkTopology.java:add(145)) - Adding a new node: 
> /default-rack/x.x.x.x:port5
>  2018-07-31 21:42:46,846 [IPC Server handler 6 on port1] INFO 
> blockmanagement.BlockReportLeaseManager 
> (BlockReportLeaseManager.java:registerNode(204)) - Registered DN 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d (x.x.x.x:port5).
>  2018-07-31 21:42:46,847 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:register(782)) - Block pool Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d) service to localhost/x.x.x.x:port1 
> successfully registered with NN
>  2018-07-31 21:42:46,847 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:offerService(612)) - For namenode 
> localhost/x.x.x.x:port1 using BLOCKREPORT_INTERVAL of 21600000msec 
> CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
>  2018-07-31 21:42:46,848 [IPC Server handler 7 on port1] INFO 
> blockmanagement.DatanodeDescriptor 
> (DatanodeDescriptor.java:updateStorage(949)) - Adding new storage ID 
> DS-386f4969-f72d-4a3a-baad-ef59d4bc3016 for DN x.x.x.x:port5
>  2018-07-31 21:42:46,850 [IPC Server handler 7 on port1] INFO 
> blockmanagement.DatanodeDescriptor 
> (DatanodeDescriptor.java:updateStorage(949)) - Adding new storage ID 
> DS-cac8e9dc-026d-474d-a76b-57d577c44846 for DN x.x.x.x:port5
>  2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2462)) - BLOCK* processReport 
> 0xcb3b9727741549e0: Processing first storage report for 
> DS-cac8e9dc-026d-474d-a76b-57d577c44846 from datanode 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d
>  2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2488)) - BLOCK* processReport 
> 0xcb3b9727741549e0: from storage DS-cac8e9dc-026d-474d-a76b-57d577c44846 node 
> DatanodeRegistration(x.x.x.x:port5, 
> datanodeUuid=48eaddaf-0bf7-47e5-bc33-4a640b4cde9d, infoPort=port6, 
> infoSecurePort=0, ipcPort=port7, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1027244560;c=1533098566550), 
> blocks: 0, hasStaleStorage: true, processing time: 0 msecs, 
> invalidatedBlocks: 0
>  2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2462)) - BLOCK* processReport 
> 0xcb3b9727741549e0: Processing first storage report for 
> DS-386f4969-f72d-4a3a-baad-ef59d4bc3016 from datanode 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d
>  2018-07-31 21:42:46,852 [Block report processor] INFO BlockStateChange 
> (BlockManager.java:processReport(2488)) - BLOCK* processReport 
> 0xcb3b9727741549e0: from storage DS-386f4969-f72d-4a3a-baad-ef59d4bc3016 node 
> DatanodeRegistration(x.x.x.x:port5, 
> datanodeUuid=48eaddaf-0bf7-47e5-bc33-4a640b4cde9d, infoPort=port6, 
> infoSecurePort=0, ipcPort=port7, 
> storageInfo=lv=-57;cid=testClusterID;nsid=1027244560;c=1533098566550), 
> blocks: 0, hasStaleStorage: false, processing time: 0 msecs, 
> invalidatedBlocks: 0
>  2018-07-31 21:42:46,853 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPServiceActor.java:blockReport(422)) - Successfully sent block report 
> 0xcb3b9727741549e0, containing 2 storage report(s), of which we sent 2. The 
> reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate 
> and 3 msecs for RPC and NN processing. Got back one command: 
> FinalizeCommand/5.
>  2018-07-31 21:42:46,853 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BPOfferService.java:processCommandFromActive(759)) - Got finalize command 
> for block pool BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,878 [IPC Server handler 9 on port1] INFO 
> FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7816)) - allowed=true 
> ugi=jenkins (auth:SIMPLE) ip=/x.x.x.x cmd=datanodeReport src=null dst=null 
> perm=null proto=rpc
>  2018-07-31 21:42:46,878 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitActive(2629)) - Cluster is active
>  2018-07-31 21:42:46,880 [main] DEBUG hdfs.DFSClient 
> (DFSClient.java:<init>(311)) - Sets 
> dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
>  2018-07-31 21:42:46,881 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
>  2018-07-31 21:42:46,881 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNode(2013)) - Shutting down DataNode 1
>  2018-07-31 21:42:46,881 [main] WARN datanode.DirectoryScanner 
> (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been 
> called
>  2018-07-31 21:42:46,881 
> [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6aba5d30] INFO 
> datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all 
> peers.
>  2018-07-31 21:42:46,882 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3,
>  DS-386f4969-f72d-4a3a-baad-ef59d4bc3016) exiting.
>  2018-07-31 21:42:46,882 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4,
>  DS-cac8e9dc-026d-474d-a76b-57d577c44846) exiting.
>  2018-07-31 21:42:46,887 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.w.WebAppContext@41ffaeb8{/,null,UNAVAILABLE}{/datanode}
>  2018-07-31 21:42:46,888 [main] INFO server.AbstractConnector 
> (AbstractConnector.java:doStop(318)) - Stopped 
> ServerConnector@43f0c2d1{HTTP/1.1,[http/1.1]}{localhost:0}
> 2018-07-31 21:42:46,888 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.s.ServletContextHandler@7acfb656
> {/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE}
> 2018-07-31 21:42:46,888 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.s.ServletContextHandler@2c6aed22
> {/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
> 2018-07-31 21:42:46,890 [main] INFO ipc.Server (Server.java:stop(3074)) - 
> Stopping server on port7
>  2018-07-31 21:42:46,891 [IPC Server listener on port7] INFO ipc.Server 
> (Server.java:run(1178)) - Stopping IPC Server listener on port7
>  2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] WARN 
> datanode.IncrementalBlockReportManager 
> (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - 
> IncrementalBlockReportManager interrupted
>  2018-07-31 21:42:46,891 [IPC Server Responder] INFO ipc.Server 
> (Server.java:run(1312)) - Stopping IPC Server Responder
>  2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] WARN datanode.DataNode 
> (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d) service to localhost/x.x.x.x:port1
>  2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BlockPoolManager.java:remove(102)) - Removed Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 48eaddaf-0bf7-47e5-bc33-4a640b4cde9d)
>  2018-07-31 21:42:46,892 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:shutdownBlockPool(2666)) - Removing block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:46,893 
> [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data4/current/BP-1309332346-172.26.0.186-1533098566550]
>  WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
>  2018-07-31 21:42:46,893 
> [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/current/BP-1309332346-172.26.0.186-1533098566550]
>  WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
>  2018-07-31 21:42:46,894 [main] INFO impl.FsDatasetAsyncDiskService 
> (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk 
> service threads
>  2018-07-31 21:42:46,894 [main] INFO impl.FsDatasetAsyncDiskService 
> (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service 
> threads have been shut down
>  2018-07-31 21:42:46,895 [main] INFO impl.RamDiskAsyncLazyPersistService 
> (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async 
> lazy persist service threads
>  2018-07-31 21:42:46,895 [main] INFO impl.RamDiskAsyncLazyPersistService 
> (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist 
> service threads have been shut down
>  2018-07-31 21:42:46,895 [main] INFO datanode.DataNode 
> (DataNode.java:shutdown(2113)) - Shutdown complete.
>  2018-07-31 21:42:46,896 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:shutdownDataNode(2013)) - Shutting down DataNode 0
>  2018-07-31 21:42:46,896 [main] WARN datanode.DirectoryScanner 
> (DirectoryScanner.java:shutdown(340)) - DirectoryScanner: shutdown has been 
> called
>  2018-07-31 21:42:46,896 
> [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@74db12c2] INFO 
> datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all 
> peers.
>  2018-07-31 21:42:46,897 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1,
>  DS-1d19d47a-b472-463a-a1ed-43be073676f5) exiting.
>  2018-07-31 21:42:46,897 
> [VolumeScannerThread(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2)]
>  INFO datanode.VolumeScanner (VolumeScanner.java:run(642)) - 
> VolumeScanner(/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2,
>  DS-c307fe32-1d7b-4c82-9d8f-6c43bb6ebe3f) exiting.
>  2018-07-31 21:42:46,901 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@68fe48d7
> {/,null,UNAVAILABLE}{/datanode}
>  2018-07-31 21:42:46,902 [main] INFO server.AbstractConnector 
> (AbstractConnector.java:doStop(318)) - Stopped 
> ServerConnector@379ce046{HTTP/1.1,[http/1.1]}{localhost:0}
>  2018-07-31 21:42:46,902 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.s.ServletContextHandler@67594471{/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE}
>  2018-07-31 21:42:46,902 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.s.ServletContextHandler@38b5f25{/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
>  2018-07-31 21:42:46,904 [main] INFO ipc.Server (Server.java:stop(3074)) - 
> Stopping server on port4
>  2018-07-31 21:42:46,905 [IPC Server listener on port4] INFO ipc.Server 
> (Server.java:run(1178)) - Stopping IPC Server listener on port4
>  2018-07-31 21:42:46,906 [IPC Server Responder] INFO ipc.Server 
> (Server.java:run(1312)) - Stopping IPC Server Responder
>  2018-07-31 21:42:46,906 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] WARN 
> datanode.IncrementalBlockReportManager 
> (IncrementalBlockReportManager.java:waitTillNextIBR(160)) - 
> IncrementalBlockReportManager interrupted
>  2018-07-31 21:42:46,906 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] WARN datanode.DataNode 
> (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14) service to localhost/x.x.x.x:port1
>  2018-07-31 21:42:47,006 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO datanode.DataNode 
> (BlockPoolManager.java:remove(102)) - Removed Block pool 
> BP-1309332346-172.26.0.186-1533098566550 (Datanode Uuid 
> 785a0dd8-bdae-40c1-94f6-a6b639dafb14)
>  2018-07-31 21:42:47,007 [BP-1309332346-172.26.0.186-1533098566550 
> heartbeating to localhost/x.x.x.x:port1] INFO impl.FsDatasetImpl 
> (FsDatasetImpl.java:shutdownBlockPool(2666)) - Removing block pool 
> BP-1309332346-172.26.0.186-1533098566550
>  2018-07-31 21:42:47,007 
> [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current/BP-1309332346-172.26.0.186-1533098566550]
>  WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
>  2018-07-31 21:42:47,007 
> [refreshUsed-/tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/current/BP-1309332346-172.26.0.186-1533098566550]
>  WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(183)) - Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
>  2018-07-31 21:42:47,008 [main] INFO impl.FsDatasetAsyncDiskService 
> (FsDatasetAsyncDiskService.java:shutdown(191)) - Shutting down all async disk 
> service threads
>  2018-07-31 21:42:47,008 [main] INFO impl.FsDatasetAsyncDiskService 
> (FsDatasetAsyncDiskService.java:shutdown(199)) - All async disk service 
> threads have been shut down
>  2018-07-31 21:42:47,009 [main] INFO impl.RamDiskAsyncLazyPersistService 
> (RamDiskAsyncLazyPersistService.java:shutdown(176)) - Shutting down all async 
> lazy persist service threads
>  2018-07-31 21:42:47,009 [main] INFO impl.RamDiskAsyncLazyPersistService 
> (RamDiskAsyncLazyPersistService.java:shutdown(183)) - All async lazy persist 
> service threads have been shut down
>  2018-07-31 21:42:47,009 [main] INFO datanode.DataNode 
> (DataNode.java:shutdown(2113)) - Shutdown complete.
>  2018-07-31 21:42:47,010 [main] INFO hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:stopAndJoinNameNode(2046)) - Shutting down the namenode
>  2018-07-31 21:42:47,010 [main] INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1339)) - Stopping services started for 
> active state
>  2018-07-31 21:42:47,010 [main] INFO namenode.FSEditLog 
> (FSEditLog.java:endCurrentLogSegment(1407)) - Ending log segment 1, 1
>  2018-07-31 21:42:47,010 
> [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@263bbfeb]
>  INFO namenode.FSNamesystem (FSNamesystem.java:run(4010)) - 
> NameNodeEditLogRoller was interrupted, exiting
>  2018-07-31 21:42:47,011 
> [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@31edeac]
>  INFO namenode.FSNamesystem (FSNamesystem.java:run(4101)) - 
> LazyPersistFileScrubber was interrupted, exiting
>  2018-07-31 21:42:47,011 [main] INFO namenode.FSEditLog 
> (FSEditLog.java:printStatistics(775)) - Number of transactions: 2 Total time 
> for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of 
> syncs: 3 SyncTimes(ms): 1 1 
>  2018-07-31 21:42:47,011 [main] INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1/current/edits_inprogress_0000000000000000001
>  -> 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000002
>  2018-07-31 21:42:47,012 [main] INFO namenode.FileJournalManager 
> (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/edits_inprogress_0000000000000000001
>  -> 
> /tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000002
>  2018-07-31 21:42:47,012 [FSEditLogAsync] INFO namenode.FSEditLog 
> (FSEditLogAsync.java:run(198)) - FSEditLogAsync was interrupted, exiting
>  2018-07-31 21:42:47,013 [CacheReplicationMonitor(1111752355)] INFO 
> blockmanagement.CacheReplicationMonitor 
> (CacheReplicationMonitor.java:run(169)) - Shutting down 
> CacheReplicationMonitor
>  2018-07-31 21:42:47,014 [main] INFO ipc.Server (Server.java:stop(3074)) - 
> Stopping server on port1
>  2018-07-31 21:42:47,015 [IPC Server Responder] INFO ipc.Server 
> (Server.java:run(1312)) - Stopping IPC Server Responder
>  2018-07-31 21:42:47,015 [IPC Server listener on port1] INFO ipc.Server 
> (Server.java:run(1178)) - Stopping IPC Server listener on port1
>  2018-07-31 21:42:47,015 [StorageInfoMonitor] INFO 
> blockmanagement.BlockManager (BlockManager.java:run(4479)) - Stopping thread.
>  2018-07-31 21:42:47,017 [RedundancyMonitor] INFO 
> blockmanagement.BlockManager (BlockManager.java:run(4444)) - Stopping 
> RedundancyMonitor.
>  2018-07-31 21:42:47,026 [main] INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopActiveServices(1339)) - Stopping services started for 
> active state
>  2018-07-31 21:42:47,026 [main] INFO namenode.FSNamesystem 
> (FSNamesystem.java:stopStandbyServices(1435)) - Stopping services started for 
> standby state
  2018-07-31 21:42:47,028 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.w.WebAppContext@1c758545{/,null,UNAVAILABLE}{/hdfs}
 2018-07-31 21:42:47,029 [main] INFO server.AbstractConnector 
> (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@117bcfdc
> {HTTP/1.1,[http/1.1]}{localhost:0}
> 2018-07-31 21:42:47,029 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.s.ServletContextHandler@6f4ade6e
> {/static,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/webapps/static/,UNAVAILABLE}
> 2018-07-31 21:42:47,029 [main] INFO handler.ContextHandler 
> (ContextHandler.java:doStop(910)) - Stopped 
> o.e.j.s.ServletContextHandler@514cd540
> {/logs,file:///tmp/tmp.u8GhlLcdks/src/CDH/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/log/,UNAVAILABLE}
> 2018-07-31 21:42:47,030 [main] INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:stop(210)) - Stopping DataNode metrics system...
>  2018-07-31 21:42:47,031 [main] INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:stop(216)) - DataNode metrics system stopped.
>  2018-07-31 21:42:47,031 [main] INFO impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:shutdown(607)) - DataNode metrics system shutdown 
> complete.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)