[ https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382288#comment-15382288 ]
Rushabh S Shah commented on HDFS-10641:
---------------------------------------
Copying the stack trace and the test logs into the jira in case build #9996
gets removed, since the test is only failing intermittently.
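For reference, the assertion that fails below (TestBlockManager.java:1074) is a bare assertTrue, which is why the stack trace reports "java.lang.AssertionError: null". A minimal sketch of the timing-sensitive pattern that can flake this way is shown here; the class and method names are hypothetical stand-ins, not the actual test body:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TimedAssertSketch {

  // Hypothetical stand-in for the NameNode-side queue the real test drives;
  // here a plain thread completes the latch instead of the block report
  // processing thread.
  private void enqueueBlockReportWork(CountDownLatch done) {
    new Thread(done::countDown).start();
  }

  @Test
  public void testQueuedWorkCompletesInTime() throws Exception {
    CountDownLatch done = new CountDownLatch(1);
    enqueueBlockReportWork(done);
    // A bare assertTrue on a timed await yields "AssertionError: null"
    // whenever the wait times out, e.g. if the processing thread is stalled
    // behind a long-held FSNamesystem write lock on a slow build host (note
    // the "write lock held for 1002 ms" line near the end of the log below).
    assertTrue(done.await(1, TimeUnit.SECONDS));
  }
}
{code}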
{noformat}
Stacktrace
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
Standard Output
2016-07-13 23:58:32,591 [main] INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(249)) - dfs.block.invalidate.limit=1000
2016-07-13 23:58:32,591 [main] INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(255)) -
dfs.namenode.datanode.registration.ip-hostname-check=true
2016-07-13 23:58:32,591 [main] INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(72)) -
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-07-13 23:58:32,591 [main] INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(77)) - The block deletion will
start around 2016 Jul 13 23:58:32
2016-07-13 23:58:32,591 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
BlocksMap
2016-07-13 23:58:32,591 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,592 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-07-13 23:58:32,592 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304
entries
2016-07-13 23:58:32,597 [main] INFO blockmanagement.BlockManager
(BlockManager.java:createBlockTokenSecretManager(454)) -
dfs.block.access.token.enable=false
2016-07-13 23:58:32,597 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct =
0.9990000128746033
2016-07-13 23:58:32,597 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension =
30000
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(440)) - defaultReplication = 3
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(441)) - maxReplication = 512
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(442)) - minReplication = 1
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(443)) - maxReplicationStreams = 2
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(444)) - replicationRecheckInterval = 3000
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(445)) - encryptDataTransfer = false
2016-07-13 23:58:32,598 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(446)) - maxNumBlocksToLog = 1000
2016-07-13 23:58:32,598 [main] INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s6 for DN
6.6.6.6:9866
2016-07-13 23:58:32,599 [main] INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s5 for DN
5.5.5.5:9866
2016-07-13 23:58:32,599 [main] INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s4 for DN
4.4.4.4:9866
2016-07-13 23:58:32,599 [main] INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s3 for DN
3.3.3.3:9866
2016-07-13 23:58:32,599 [main] INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s2 for DN
2.2.2.2:9866
2016-07-13 23:58:32,599 [main] INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(911)) - Adding new storage ID s1 for DN
1.1.1.1:9866
2016-07-13 23:58:32,613 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:<init>(465)) - starting cluster: numNameNodes=1,
numDataNodes=1
Formatting using clusterid: testClusterID
2016-07-13 23:58:32,616 [main] INFO namenode.FSEditLog
(FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-07-13 23:58:32,616 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(703)) - KeyProvider: null
2016-07-13 23:58:32,617 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(710)) - fsLock is fair:true
2016-07-13 23:58:32,617 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(726)) - fsOwner = jenkins (auth:SIMPLE)
2016-07-13 23:58:32,617 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(727)) - supergroup = supergroup
2016-07-13 23:58:32,617 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(728)) - isPermissionEnabled = true
2016-07-13 23:58:32,617 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(739)) - HA Enabled: false
2016-07-13 23:58:32,618 [main] INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(249)) - dfs.block.invalidate.limit=1000
2016-07-13 23:58:32,618 [main] INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(255)) -
dfs.namenode.datanode.registration.ip-hostname-check=true
2016-07-13 23:58:32,618 [main] INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(72)) -
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-07-13 23:58:32,619 [main] INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(77)) - The block deletion will
start around 2016 Jul 13 23:58:32
2016-07-13 23:58:32,619 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
BlocksMap
2016-07-13 23:58:32,619 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,619 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-07-13 23:58:32,619 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304
entries
2016-07-13 23:58:32,624 [main] INFO blockmanagement.BlockManager
(BlockManager.java:createBlockTokenSecretManager(454)) -
dfs.block.access.token.enable=false
2016-07-13 23:58:32,625 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct =
0.9990000128746033
2016-07-13 23:58:32,625 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-07-13 23:58:32,625 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-07-13 23:58:32,625 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(440)) - defaultReplication = 1
2016-07-13 23:58:32,625 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(441)) - maxReplication = 512
2016-07-13 23:58:32,625 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(442)) - minReplication = 1
2016-07-13 23:58:32,626 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(443)) - maxReplicationStreams = 2
2016-07-13 23:58:32,626 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(444)) - replicationRecheckInterval = 3000
2016-07-13 23:58:32,626 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(445)) - encryptDataTransfer = false
2016-07-13 23:58:32,626 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(446)) - maxNumBlocksToLog = 1000
2016-07-13 23:58:32,626 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
INodeMap
2016-07-13 23:58:32,626 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,627 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-07-13 23:58:32,627 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152
entries
2016-07-13 23:58:32,630 [main] INFO namenode.FSDirectory
(FSDirectory.java:<init>(246)) - ACLs enabled? false
2016-07-13 23:58:32,630 [main] INFO namenode.FSDirectory
(FSDirectory.java:<init>(250)) - XAttrs enabled? true
2016-07-13 23:58:32,630 [main] INFO namenode.NameNode
(FSDirectory.java:<init>(314)) - Caching file names occuring more than 10 times
2016-07-13 23:58:32,630 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
cachedBlocks
2016-07-13 23:58:32,630 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,631 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-07-13 23:58:32,631 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^19 = 524288
entries
2016-07-13 23:58:32,632 [main] INFO metrics.TopMetrics
(TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets
= 10
2016-07-13 23:58:32,632 [main] INFO metrics.TopMetrics
(TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-07-13 23:58:32,632 [main] INFO metrics.TopMetrics
(TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes =
1,5,25
2016-07-13 23:58:32,632 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:initRetryCache(914)) - Retry cache on namenode is enabled
2016-07-13 23:58:32,632 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:initRetryCache(922)) - Retry cache will use 0.03 of total
heap and retry cache entry expiry time is 600000 millis
2016-07-13 23:58:32,633 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
NameNodeRetryCache
2016-07-13 23:58:32,633 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,633 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory
1.8 GB = 559.3 KB
2016-07-13 23:58:32,633 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536
entries
2016-07-13 23:58:32,634 [main] INFO namenode.FSImage
(FSImage.java:format(154)) - Allocated new BlockPoolId:
BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:32,637 [main] INFO common.Storage
(NNStorage.java:format(577)) - Storage directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1
has been successfully formatted.
2016-07-13 23:58:32,639 [main] INFO common.Storage
(NNStorage.java:format(577)) - Storage directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2
has been successfully formatted.
2016-07-13 23:58:32,641 [FSImageSaver for
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1
of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf
(FSImageFormatProtobuf.java:save(429)) - Saving image file
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000
using no compression
2016-07-13 23:58:32,641 [FSImageSaver for
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2
of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf
(FSImageFormatProtobuf.java:save(429)) - Saving image file
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000
using no compression
2016-07-13 23:58:32,652 [FSImageSaver for
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1
of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf
(FSImageFormatProtobuf.java:save(432)) - Image file
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000
of size 365 bytes saved in 0 seconds.
2016-07-13 23:58:32,652 [FSImageSaver for
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2
of type IMAGE_AND_EDITS] INFO namenode.FSImageFormatProtobuf
(FSImageFormatProtobuf.java:save(432)) - Image file
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000
of size 365 bytes saved in 0 seconds.
2016-07-13 23:58:32,655 [main] INFO namenode.NNStorageRetentionManager
(NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 1
images with txid >= 0
2016-07-13 23:58:32,658 [main] INFO namenode.NameNode
(NameNode.java:createNameNode(1556)) - createNameNode []
2016-07-13 23:58:32,660 [main] INFO impl.MetricsConfig
(MetricsConfig.java:loadFirst(115)) - loaded properties from
hadoop-metrics2.properties
2016-07-13 23:58:32,662 [main] INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 0
second(s).
2016-07-13 23:58:32,662 [main] INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:start(192)) - NameNode metrics system started
2016-07-13 23:58:32,663 [main] INFO namenode.NameNode
(NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is
hdfs://127.0.0.1:0
2016-07-13 23:58:32,668
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@2532b351] INFO
util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause
monitor
2016-07-13 23:58:32,668 [main] INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1491)) - Starting Web-server for
hdfs at: http://localhost:0
2016-07-13 23:58:32,670 [main] INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2016-07-13 23:58:32,670 [main] INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.namenode is not defined
2016-07-13 23:58:32,671 [main] INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(746)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-07-13 23:58:32,672 [main] INFO http.HttpServer2
(HttpServer2.java:addFilter(721)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context hdfs
2016-07-13 23:58:32,672 [main] INFO http.HttpServer2
(HttpServer2.java:addFilter(729)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2016-07-13 23:58:32,672 [main] INFO http.HttpServer2
(HttpServer2.java:addFilter(729)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2016-07-13 23:58:32,673 [main] INFO http.HttpServer2
(NameNodeHttpServer.java:initWebHdfs(94)) - Added filter
'org.apache.hadoop.hdfs.web.AuthFilter'
(class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-07-13 23:58:32,674 [main] INFO http.HttpServer2
(HttpServer2.java:addJerseyResourcePackage(645)) - addJerseyResourcePackage:
packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2016-07-13 23:58:32,674 [main] INFO http.HttpServer2
(HttpServer2.java:openListeners(951)) - Jetty bound to port 58929
2016-07-13 23:58:32,674 [main] INFO mortbay.log (Slf4jLog.java:info(67)) -
jetty-6.1.26
2016-07-13 23:58:32,700 [main] INFO mortbay.log (Slf4jLog.java:info(67)) -
Started SelectChannelConnector@localhost:58929
2016-07-13 23:58:32,703 [main] INFO namenode.FSEditLog
(FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-07-13 23:58:32,704 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(703)) - KeyProvider: null
2016-07-13 23:58:32,704 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(710)) - fsLock is fair:true
2016-07-13 23:58:32,704 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(726)) - fsOwner = jenkins (auth:SIMPLE)
2016-07-13 23:58:32,704 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(727)) - supergroup = supergroup
2016-07-13 23:58:32,704 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(728)) - isPermissionEnabled = true
2016-07-13 23:58:32,705 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(739)) - HA Enabled: false
2016-07-13 23:58:32,706 [main] INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(249)) - dfs.block.invalidate.limit=1000
2016-07-13 23:58:32,707 [main] INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(255)) -
dfs.namenode.datanode.registration.ip-hostname-check=true
2016-07-13 23:58:32,707 [main] INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(72)) -
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-07-13 23:58:32,708 [main] INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(77)) - The block deletion will
start around 2016 Jul 13 23:58:32
2016-07-13 23:58:32,708 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
BlocksMap
2016-07-13 23:58:32,708 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,708 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-07-13 23:58:32,709 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^22 = 4194304
entries
2016-07-13 23:58:32,783 [main] INFO blockmanagement.BlockManager
(BlockManager.java:createBlockTokenSecretManager(454)) -
dfs.block.access.token.enable=false
2016-07-13 23:58:32,785 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct =
0.9990000128746033
2016-07-13 23:58:32,785 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-07-13 23:58:32,785 [main] INFO blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-07-13 23:58:32,785 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(440)) - defaultReplication = 1
2016-07-13 23:58:32,786 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(441)) - maxReplication = 512
2016-07-13 23:58:32,786 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(442)) - minReplication = 1
2016-07-13 23:58:32,786 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(443)) - maxReplicationStreams = 2
2016-07-13 23:58:32,786 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(444)) - replicationRecheckInterval = 3000
2016-07-13 23:58:32,786 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(445)) - encryptDataTransfer = false
2016-07-13 23:58:32,787 [main] INFO blockmanagement.BlockManager
(BlockManager.java:<init>(446)) - maxNumBlocksToLog = 1000
2016-07-13 23:58:32,788 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
INodeMap
2016-07-13 23:58:32,788 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,789 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-07-13 23:58:32,790 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152
entries
2016-07-13 23:58:32,793 [main] INFO namenode.FSDirectory
(FSDirectory.java:<init>(246)) - ACLs enabled? false
2016-07-13 23:58:32,794 [main] INFO namenode.FSDirectory
(FSDirectory.java:<init>(250)) - XAttrs enabled? true
2016-07-13 23:58:32,795 [main] INFO namenode.NameNode
(FSDirectory.java:<init>(314)) - Caching file names occuring more than 10 times
2016-07-13 23:58:32,795 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
cachedBlocks
2016-07-13 23:58:32,795 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,795 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-07-13 23:58:32,796 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^19 = 524288
entries
2016-07-13 23:58:32,797 [main] INFO metrics.TopMetrics
(TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets
= 10
2016-07-13 23:58:32,797 [main] INFO metrics.TopMetrics
(TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-07-13 23:58:32,797 [main] INFO metrics.TopMetrics
(TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes =
1,5,25
2016-07-13 23:58:32,797 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:initRetryCache(914)) - Retry cache on namenode is enabled
2016-07-13 23:58:32,797 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:initRetryCache(922)) - Retry cache will use 0.03 of total
heap and retry cache entry expiry time is 600000 millis
2016-07-13 23:58:32,797 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map
NameNodeRetryCache
2016-07-13 23:58:32,798 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2016-07-13 23:58:32,798 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory
1.8 GB = 559.3 KB
2016-07-13 23:58:32,798 [main] INFO util.GSet
(LightWeightGSet.java:computeCapacity(402)) - capacity = 2^16 = 65536
entries
2016-07-13 23:58:32,800 [main] INFO common.Storage (Storage.java:tryLock(789))
- Lock on
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/in_use.lock
acquired by nodename 8054@081c53604a3b
2016-07-13 23:58:32,801 [main] INFO common.Storage (Storage.java:tryLock(789))
- Lock on
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2/in_use.lock
acquired by nodename 8054@081c53604a3b
2016-07-13 23:58:32,803 [main] INFO namenode.FileJournalManager
(FileJournalManager.java:recoverUnfinalizedSegments(389)) - Recovering
unfinalized segments in
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current
2016-07-13 23:58:32,803 [main] INFO namenode.FileJournalManager
(FileJournalManager.java:recoverUnfinalizedSegments(389)) - Recovering
unfinalized segments in
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2/current
2016-07-13 23:58:32,803 [main] INFO namenode.FSImage
(FSImage.java:loadFSImage(665)) - No edit log streams selected.
2016-07-13 23:58:32,803 [main] INFO namenode.FSImage
(FSImage.java:loadFSImageFile(730)) - Planning to load image:
FSImageFile(file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current/fsimage_0000000000000000000,
cpktTxId=0000000000000000000)
2016-07-13 23:58:32,804 [main] INFO namenode.FSImageFormatPBINode
(FSImageFormatPBINode.java:loadINodeSection(265)) - Loading 1 INodes.
2016-07-13 23:58:32,805 [main] INFO namenode.FSImageFormatProtobuf
(FSImageFormatProtobuf.java:load(185)) - Loaded FSImage in 0 seconds.
2016-07-13 23:58:32,806 [main] INFO namenode.FSImage
(FSImage.java:loadFSImage(896)) - Loaded image for txid 0 from
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current/fsimage_0000000000000000000
2016-07-13 23:58:32,806 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:loadFSImage(1021)) - Need to save fs image? false
(staleImage=false, haEnabled=false, isRollingUpgrade=false)
2016-07-13 23:58:32,806 [main] INFO namenode.FSEditLog
(FSEditLog.java:startLogSegment(1273)) - Starting log segment at 1
2016-07-13 23:58:32,811 [main] INFO namenode.NameCache
(NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2016-07-13 23:58:32,812 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:loadFromDisk(675)) - Finished loading FSImage in 12 msecs
2016-07-13 23:58:32,812 [main] INFO namenode.NameNode
(NameNodeRpcServer.java:<init>(427)) - RPC server is binding to localhost:0
2016-07-13 23:58:32,812 [main] INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(75)) - Using callQueue: class
java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class
org.apache.hadoop.ipc.DefaultRpcScheduler
2016-07-13 23:58:32,813 [Socket Reader #1 for port 53748] INFO ipc.Server
(Server.java:run(825)) - Starting Socket Reader #1 for port 53748
2016-07-13 23:58:32,817 [main] INFO namenode.NameNode
(NameNode.java:initialize(723)) - Clients are to use localhost:53748 to access
this namenode/service.
2016-07-13 23:58:32,817 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:registerMBean(4596)) - Registered FSNamesystemState MBean
2016-07-13 23:58:32,835 [main] INFO namenode.LeaseManager
(LeaseManager.java:getNumUnderConstructionBlocks(126)) - Number of blocks under
construction: 0
2016-07-13 23:58:32,839 [main] WARN blockmanagement.BlockManagerSafeMode
(BlockManagerSafeMode.java:leaveSafeMode(366)) - forceExit used when normal
exist would suffice. Treating force exit as normal safe mode exit.
2016-07-13 23:58:32,839 [main] INFO blockmanagement.BlockManager
(BlockManager.java:initializeReplQueues(4413)) - initializing replication queues
2016-07-13 23:58:32,839 [main] INFO hdfs.StateChange
(BlockManagerSafeMode.java:leaveSafeMode(383)) - STATE* Leaving safe mode after
0 secs
2016-07-13 23:58:32,840 [main] INFO hdfs.StateChange
(BlockManagerSafeMode.java:leaveSafeMode(389)) - STATE* Network topology has 0
racks and 0 datanodes
2016-07-13 23:58:32,840 [main] INFO hdfs.StateChange
(BlockManagerSafeMode.java:leaveSafeMode(391)) - STATE* UnderReplicatedBlocks
has 0 blocks
2016-07-13 23:58:32,844 [Reconstruction Queue Initializer] INFO
blockmanagement.BlockManager
(BlockManager.java:processMisReplicatesAsync(3160)) - Total number of blocks
= 0
2016-07-13 23:58:32,844 [Reconstruction Queue Initializer] INFO
blockmanagement.BlockManager
(BlockManager.java:processMisReplicatesAsync(3161)) - Number of invalid blocks
= 0
2016-07-13 23:58:32,844 [Reconstruction Queue Initializer] INFO
blockmanagement.BlockManager
(BlockManager.java:processMisReplicatesAsync(3162)) - Number of
under-replicated blocks = 0
2016-07-13 23:58:32,845 [Reconstruction Queue Initializer] INFO
blockmanagement.BlockManager
(BlockManager.java:processMisReplicatesAsync(3163)) - Number of
over-replicated blocks = 0
2016-07-13 23:58:32,845 [Reconstruction Queue Initializer] INFO
blockmanagement.BlockManager
(BlockManager.java:processMisReplicatesAsync(3165)) - Number of blocks being
written = 0
2016-07-13 23:58:32,845 [Reconstruction Queue Initializer] INFO
hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3167)) - STATE*
Replication Queue initialization scan for invalid, over- and under-replicated
blocks completed in 5 msec
2016-07-13 23:58:32,847 [IPC Server Responder] INFO ipc.Server
(Server.java:run(1059)) - IPC Server Responder: starting
2016-07-13 23:58:32,847 [IPC Server listener on 53748] INFO ipc.Server
(Server.java:run(895)) - IPC Server listener on 53748: starting
2016-07-13 23:58:32,853 [main] INFO namenode.NameNode
(NameNode.java:startCommonServices(809)) - NameNode RPC up at:
localhost/127.0.0.1:53748
2016-07-13 23:58:32,853 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:startActiveServices(1130)) - Starting services required for
active state
2016-07-13 23:58:32,853 [main] INFO namenode.FSDirectory
(FSDirectory.java:updateCountForQuota(605)) - Initializing quota with 4
thread(s)
2016-07-13 23:58:32,854 [main] INFO namenode.FSDirectory
(FSDirectory.java:updateCountForQuota(614)) - Quota initialization completed in
0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2016-07-13 23:58:32,857 [CacheReplicationMonitor(1385212060)] INFO
blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(161))
- Starting CacheReplicationMonitor with interval 30000 milliseconds
2016-07-13 23:58:32,861 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:startDataNodes(1574)) - Starting DataNode 0 with
dfs.datanode.data.dir:
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2
2016-07-13 23:58:32,874 [main] INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(159)) - DataNode metrics system started (again)
2016-07-13 23:58:32,874 [main] INFO datanode.BlockScanner
(BlockScanner.java:<init>(172)) - Initialized block scanner with
targetBytesPerSec 1048576
2016-07-13 23:58:32,875 [main] INFO datanode.DataNode
(DataNode.java:<init>(487)) - Configured hostname is 127.0.0.1
2016-07-13 23:58:32,875 [main] INFO datanode.DataNode
(DataNode.java:startDataNode(1320)) - Starting DataNode with maxLockedMemory = 0
2016-07-13 23:58:32,875 [main] INFO datanode.DataNode
(DataNode.java:initDataXceiver(1077)) - Opened streaming server at
/127.0.0.1:33522
2016-07-13 23:58:32,876 [main] INFO datanode.DataNode
(DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 10485760 bytes/s
2016-07-13 23:58:32,876 [main] INFO datanode.DataNode
(DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2016-07-13 23:58:32,878 [main] INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2016-07-13 23:58:32,879 [main] INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.datanode is not defined
2016-07-13 23:58:32,879 [main] INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(746)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-07-13 23:58:32,880 [main] INFO http.HttpServer2
(HttpServer2.java:addFilter(721)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context datanode
2016-07-13 23:58:32,880 [main] INFO http.HttpServer2
(HttpServer2.java:addFilter(729)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2016-07-13 23:58:32,881 [main] INFO http.HttpServer2
(HttpServer2.java:addFilter(729)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2016-07-13 23:58:32,881 [main] INFO http.HttpServer2
(HttpServer2.java:openListeners(951)) - Jetty bound to port 42084
2016-07-13 23:58:32,881 [main] INFO mortbay.log (Slf4jLog.java:info(67)) -
jetty-6.1.26
2016-07-13 23:58:32,906 [main] INFO mortbay.log (Slf4jLog.java:info(67)) -
Started SelectChannelConnector@localhost:42084
2016-07-13 23:58:32,913 [main] INFO web.DatanodeHttpServer
(DatanodeHttpServer.java:start(231)) - Listening HTTP traffic on
/127.0.0.1:48106
2016-07-13 23:58:32,914
[org.apache.hadoop.util.JvmPauseMonitor$Monitor@7bb35cc6] INFO
util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause
monitor
2016-07-13 23:58:32,914 [main] INFO datanode.DataNode
(DataNode.java:startDataNode(1347)) - dnUserName = jenkins
2016-07-13 23:58:32,914 [main] INFO datanode.DataNode
(DataNode.java:startDataNode(1348)) - supergroup = supergroup
2016-07-13 23:58:32,915 [main] INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(75)) - Using callQueue: class
java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class
org.apache.hadoop.ipc.DefaultRpcScheduler
2016-07-13 23:58:32,915 [Socket Reader #1 for port 47206] INFO ipc.Server
(Server.java:run(825)) - Starting Socket Reader #1 for port 47206
2016-07-13 23:58:32,918 [main] INFO datanode.DataNode
(DataNode.java:initIpcServer(964)) - Opened IPC server at /127.0.0.1:47206
2016-07-13 23:58:32,921 [main] INFO datanode.DataNode
(BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for
nameservices: null
2016-07-13 23:58:32,922 [main] INFO datanode.DataNode
(BlockPoolManager.java:doRefreshNamenodes(198)) - Starting BPOfferServices for
nameservices: <default>
2016-07-13 23:58:32,924 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPServiceActor.java:run(729)) - Block pool <registering> (Datanode Uuid
unassigned) service to localhost/127.0.0.1:53748 starting to offer service
2016-07-13 23:58:32,927 [IPC Server Responder] INFO ipc.Server
(Server.java:run(1059)) - IPC Server Responder: starting
2016-07-13 23:58:32,927 [IPC Server listener on 47206] INFO ipc.Server
(Server.java:run(895)) - IPC Server listener on 47206: starting
2016-07-13 23:58:32,936 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shouldWait(2567)) - dnInfo.length != numDataNodes
2016-07-13 23:58:32,937 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(DataStorage.java:getParallelVolumeLoadThreadsNum(355)) - Using 2 threads to
upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2,
dataDirs=2)
2016-07-13 23:58:32,937 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:waitActive(2488)) - Waiting for cluster to become active
2016-07-13 23:58:32,939 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(Storage.java:tryLock(789)) - Lock on
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/in_use.lock
acquired by nodename 8054@081c53604a3b
2016-07-13 23:58:32,939 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(DataStorage.java:loadStorageDirectory(280)) - Storage directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1
is not formatted for namespace 1350066422. Formatting...
2016-07-13 23:58:32,940 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(DataStorage.java:createStorageID(158)) - Generated new storageID
DS-81db5345-3d1b-4408-b503-17a174d127e6 for directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1
2016-07-13 23:58:32,942 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(Storage.java:tryLock(789)) - Lock on
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/in_use.lock
acquired by nodename 8054@081c53604a3b
2016-07-13 23:58:32,942 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(DataStorage.java:loadStorageDirectory(280)) - Storage directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2
is not formatted for namespace 1350066422. Formatting...
2016-07-13 23:58:32,942 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(DataStorage.java:createStorageID(158)) - Generated new storageID
DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec for directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2
2016-07-13 23:58:32,964 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(BlockPoolSliceStorage.java:recoverTransitionRead(249)) - Analyzing storage
directories for bpid BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:32,964 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(Storage.java:lock(748)) - Locking is disabled for
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current/BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:32,964 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(BlockPoolSliceStorage.java:loadStorageDirectory(164)) - Block pool storage
directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current/BP-1728296442-172.17.0.2-1468454312634
is not formatted for BP-1728296442-172.17.0.2-1468454312634. Formatting ...
2016-07-13 23:58:32,964 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(BlockPoolSliceStorage.java:format(277)) - Formatting block pool
BP-1728296442-172.17.0.2-1468454312634 directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current/BP-1728296442-172.17.0.2-1468454312634/current
2016-07-13 23:58:32,981 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(BlockPoolSliceStorage.java:recoverTransitionRead(249)) - Analyzing storage
directories for bpid BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:32,982 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(Storage.java:lock(748)) - Locking is disabled for
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current/BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:32,982 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(BlockPoolSliceStorage.java:loadStorageDirectory(164)) - Block pool storage
directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current/BP-1728296442-172.17.0.2-1468454312634
is not formatted for BP-1728296442-172.17.0.2-1468454312634. Formatting ...
2016-07-13 23:58:32,982 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO common.Storage
(BlockPoolSliceStorage.java:format(277)) - Formatting block pool
BP-1728296442-172.17.0.2-1468454312634 directory
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current/BP-1728296442-172.17.0.2-1468454312634/current
2016-07-13 23:58:32,984 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(DataNode.java:initStorage(1618)) - Setting up storage:
nsid=1350066422;bpid=BP-1728296442-172.17.0.2-1468454312634;lv=-57;nsInfo=lv=-64;cid=testClusterID;nsid=1350066422;c=1468454312634;bpid=BP-1728296442-172.17.0.2-1468454312634;dnuuid=null
2016-07-13 23:58:32,985 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(DataNode.java:checkDatanodeUuid(1445)) - Generated and persisted new Datanode
UUID 32c80269-83e0-49a2-8f56-a7bdf2ae9d1a
2016-07-13 23:58:32,986 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsVolumeList.java:addVolume(313)) - Added new volume:
DS-81db5345-3d1b-4408-b503-17a174d127e6
2016-07-13 23:58:32,986 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsDatasetImpl.java:addVolume(415)) - Added volume -
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current,
StorageType: DISK
2016-07-13 23:58:32,988 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsVolumeList.java:addVolume(313)) - Added new volume:
DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec
2016-07-13 23:58:32,988 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsDatasetImpl.java:addVolume(415)) - Added volume -
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current,
StorageType: DISK
2016-07-13 23:58:32,994 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsDatasetImpl.java:registerMBean(2211)) - Registered FSDatasetState MBean
2016-07-13 23:58:33,000 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
2016-07-13 23:58:33,000 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsDatasetImpl.java:addBlockPool(2686)) - Adding block pool
BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:33,001 [Thread-201] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(410)) - Scanning block pool
BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current...
2016-07-13 23:58:33,001 [Thread-202] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(410)) - Scanning block pool
BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current...
2016-07-13 23:58:33,008 [Thread-201] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(415)) - Time taken to scan block pool
BP-1728296442-172.17.0.2-1468454312634 on
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current:
7ms
2016-07-13 23:58:33,010 [Thread-202] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(415)) - Time taken to scan block pool
BP-1728296442-172.17.0.2-1468454312634 on
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current:
8ms
2016-07-13 23:58:33,010 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsVolumeList.java:addBlockPool(441)) - Total time to scan all replicas for
block pool BP-1728296442-172.17.0.2-1468454312634: 10ms
2016-07-13 23:58:33,010 [Thread-205] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(189)) - Adding replicas to map for block pool
BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current...
2016-07-13 23:58:33,010 [Thread-205] INFO impl.BlockPoolSlice
(BlockPoolSlice.java:readReplicasFromCache(752)) - Replica Cache file:
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current/BP-1728296442-172.17.0.2-1468454312634/current/replicas
doesn't exist
2016-07-13 23:58:33,011 [Thread-205] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(194)) - Time to add replicas to map for block pool
BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current:
0ms
2016-07-13 23:58:33,011 [Thread-206] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(189)) - Adding replicas to map for block pool
BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current...
2016-07-13 23:58:33,011 [Thread-206] INFO impl.BlockPoolSlice
(BlockPoolSlice.java:readReplicasFromCache(752)) - Replica Cache file:
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current/BP-1728296442-172.17.0.2-1468454312634/current/replicas
doesn't exist
2016-07-13 23:58:33,011 [Thread-206] INFO impl.FsDatasetImpl
(FsVolumeList.java:run(194)) - Time to add replicas to map for block pool
BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current:
0ms
2016-07-13 23:58:33,011 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsVolumeList.java:getAllVolumesMap(220)) - Total time to add all replicas to
map: 1ms
2016-07-13 23:58:33,012
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2)]
INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(377))
- Now scanning bpid BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2
2016-07-13 23:58:33,012
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1)]
INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(377))
- Now scanning bpid BP-1728296442-172.17.0.2-1468454312634 on volume
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1
2016-07-13 23:58:33,012
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2)]
INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(532)) -
VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2,
DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec): finished scanning block pool
BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:33,012 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DirectoryScanner
(DirectoryScanner.java:start(472)) - Periodic Directory Tree Verification scan
starting at 1468474889012ms with interval of 21600000ms
2016-07-13 23:58:33,013
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1)]
INFO datanode.VolumeScanner (VolumeScanner.java:runLoop(532)) -
VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
DS-81db5345-3d1b-4408-b503-17a174d127e6): finished scanning block pool
BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:33,013 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPServiceActor.java:register(683)) - Block pool
BP-1728296442-172.17.0.2-1468454312634 (Datanode Uuid
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a) service to localhost/127.0.0.1:53748
beginning handshake with NN
2016-07-13 23:58:33,013
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2)]
INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(394))
-
VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2,
DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec): no suitable block pools found to
scan. Waiting 1814399999 ms.
2016-07-13 23:58:33,014
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1)]
INFO datanode.VolumeScanner (VolumeScanner.java:findNextUsableBlockIter(394))
-
VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
DS-81db5345-3d1b-4408-b503-17a174d127e6): no suitable block pools found to
scan. Waiting 1814399998 ms.
2016-07-13 23:58:33,015 [IPC Server handler 2 on 53748] INFO hdfs.StateChange
(DatanodeManager.java:registerDatanode(951)) - BLOCK* registerDatanode: from
DatanodeRegistration(127.0.0.1:33522,
datanodeUuid=32c80269-83e0-49a2-8f56-a7bdf2ae9d1a, infoPort=48106,
infoSecurePort=0, ipcPort=47206,
storageInfo=lv=-57;cid=testClusterID;nsid=1350066422;c=1468454312634) storage
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a
2016-07-13 23:58:33,015 [IPC Server handler 2 on 53748] INFO
net.NetworkTopology (NetworkTopology.java:add(435)) - Adding a new node:
/default-rack/127.0.0.1:33522
2016-07-13 23:58:33,016 [IPC Server handler 2 on 53748] INFO
blockmanagement.BlockReportLeaseManager
(BlockReportLeaseManager.java:registerNode(204)) - Registered DN
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a (127.0.0.1:33522).
2016-07-13 23:58:33,018 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPServiceActor.java:register(702)) - Block pool Block pool
BP-1728296442-172.17.0.2-1468454312634 (Datanode Uuid
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a) service to localhost/127.0.0.1:53748
successfully registered with NN
2016-07-13 23:58:33,018 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPServiceActor.java:offerService(540)) - For namenode
localhost/127.0.0.1:53748 using BLOCKREPORT_INTERVAL of 21600000msec
CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-07-13 23:58:33,022 [IPC Server handler 3 on 53748] INFO
blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(911))
- Adding new storage ID DS-81db5345-3d1b-4408-b503-17a174d127e6 for DN
127.0.0.1:33522
2016-07-13 23:58:33,023 [IPC Server handler 3 on 53748] INFO
blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateStorage(911))
- Adding new storage ID DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec for DN
127.0.0.1:33522
2016-07-13 23:58:33,024 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPOfferService.java:updateActorStatesFromHeartbeat(514)) - Namenode Block pool
BP-1728296442-172.17.0.2-1468454312634 (Datanode Uuid
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a) service to localhost/127.0.0.1:53748
trying to claim ACTIVE state with txid=1
2016-07-13 23:58:33,025 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPOfferService.java:updateActorStatesFromHeartbeat(526)) - Acknowledging
ACTIVE Namenode Block pool BP-1728296442-172.17.0.2-1468454312634 (Datanode
Uuid 32c80269-83e0-49a2-8f56-a7bdf2ae9d1a) service to localhost/127.0.0.1:53748
2016-07-13 23:58:33,026 [Block report processor] INFO
blockmanagement.BlockManager (BlockManager.java:processReport(2178)) -
Processing first storage report for DS-81db5345-3d1b-4408-b503-17a174d127e6
from datanode 32c80269-83e0-49a2-8f56-a7bdf2ae9d1a
2016-07-13 23:58:33,027 [Block report processor] INFO BlockStateChange
(BlockManager.java:processReport(2229)) - BLOCK* processReport: from storage
DS-81db5345-3d1b-4408-b503-17a174d127e6 node
DatanodeRegistration(127.0.0.1:33522,
datanodeUuid=32c80269-83e0-49a2-8f56-a7bdf2ae9d1a, infoPort=48106,
infoSecurePort=0, ipcPort=47206,
storageInfo=lv=-57;cid=testClusterID;nsid=1350066422;c=1468454312634), blocks:
0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2016-07-13 23:58:33,028 [Block report processor] INFO
blockmanagement.BlockManager (BlockManager.java:processReport(2178)) -
Processing first storage report for DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec
from datanode 32c80269-83e0-49a2-8f56-a7bdf2ae9d1a
2016-07-13 23:58:33,028 [Block report processor] INFO BlockStateChange
(BlockManager.java:processReport(2229)) - BLOCK* processReport: from storage
DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec node
DatanodeRegistration(127.0.0.1:33522,
datanodeUuid=32c80269-83e0-49a2-8f56-a7bdf2ae9d1a, infoPort=48106,
infoSecurePort=0, ipcPort=47206,
storageInfo=lv=-57;cid=testClusterID;nsid=1350066422;c=1468454312634), blocks:
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2016-07-13 23:58:33,030 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPServiceActor.java:blockReport(389)) - Successfully sent block report
0x5b66785bbb56291f, containing 2 storage report(s), of which we sent 2. The
reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and
5 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-07-13 23:58:33,030 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BPOfferService.java:processCommandFromActive(696)) - Got finalize command for
block pool BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:33,041 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:waitActive(2543)) - Cluster is active
2016-07-13 23:58:33,046 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:waitActive(2543)) - Cluster is active
2016-07-13 23:58:34,050 [Block report processor] INFO namenode.FSNamesystem
(FSNamesystem.java:writeUnlock(1536)) - FSNamesystem write lock held for 1002
ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1011)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1537)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4532)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:4504)
2016-07-13 23:58:34,051 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shutdown(1907)) - Shutting down the Mini HDFS Cluster
2016-07-13 23:58:34,052 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shutdownDataNodes(1947)) - Shutting down DataNode 0
2016-07-13 23:58:34,052 [main] WARN datanode.DirectoryScanner
(DirectoryScanner.java:shutdown(531)) - DirectoryScanner: shutdown has been
called
2016-07-13 23:58:34,052
[org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2ba33e2c] INFO
datanode.DataNode (DataXceiverServer.java:closeAllPeers(281)) - Closing all
peers.
2016-07-13 23:58:34,054
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2)]
INFO datanode.VolumeScanner (VolumeScanner.java:run(629)) -
VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2,
DS-1f9e5e15-ac05-4c41-b5a3-abd3a7bd22ec) exiting.
2016-07-13 23:58:34,056
[VolumeScannerThread(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1)]
INFO datanode.VolumeScanner (VolumeScanner.java:run(629)) -
VolumeScanner(/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1,
DS-81db5345-3d1b-4408-b503-17a174d127e6) exiting.
2016-07-13 23:58:34,098 [main] INFO mortbay.log (Slf4jLog.java:info(67)) -
Stopped SelectChannelConnector@localhost:0
2016-07-13 23:58:34,199 [main] INFO ipc.Server (Server.java:stop(2825)) -
Stopping server on 47206
2016-07-13 23:58:34,201 [IPC Server Responder] INFO ipc.Server
(Server.java:run(1064)) - Stopping IPC Server Responder
2016-07-13 23:58:34,201 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] WARN
datanode.IncrementalBlockReportManager
(IncrementalBlockReportManager.java:waitTillNextIBR(132)) -
IncrementalBlockReportManager interrupted
2016-07-13 23:58:34,201 [IPC Server listener on 47206] INFO ipc.Server
(Server.java:run(927)) - Stopping IPC Server listener on 47206
2016-07-13 23:58:34,203 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] WARN datanode.DataNode
(BPServiceActor.java:run(772)) - Ending block pool service for: Block pool
BP-1728296442-172.17.0.2-1468454312634 (Datanode Uuid
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a) service to localhost/127.0.0.1:53748
2016-07-13 23:58:34,205 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO datanode.DataNode
(BlockPoolManager.java:remove(102)) - Removed Block pool
BP-1728296442-172.17.0.2-1468454312634 (Datanode Uuid
32c80269-83e0-49a2-8f56-a7bdf2ae9d1a)
2016-07-13 23:58:34,206 [DataNode:
[[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/,
[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:53748] INFO impl.FsDatasetImpl
(FsDatasetImpl.java:shutdownBlockPool(2696)) - Removing block pool
BP-1728296442-172.17.0.2-1468454312634
2016-07-13 23:58:34,206
[refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1/current/BP-1728296442-172.17.0.2-1468454312634]
WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(180)) - Thread
Interrupted waiting to refresh disk information
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:176)
at java.lang.Thread.run(Thread.java:745)
2016-07-13 23:58:34,206
[refreshUsed-/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2/current/BP-1728296442-172.17.0.2-1468454312634]
WARN fs.CachingGetSpaceUsed (CachingGetSpaceUsed.java:run(180)) - Thread
Interrupted waiting to refresh disk information
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:176)
at java.lang.Thread.run(Thread.java:745)
2016-07-13 23:58:34,209 [main] INFO impl.FsDatasetAsyncDiskService
(FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk
service threads
2016-07-13 23:58:34,209 [main] INFO impl.FsDatasetAsyncDiskService
(FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads
have been shut down
2016-07-13 23:58:34,210 [main] INFO impl.RamDiskAsyncLazyPersistService
(RamDiskAsyncLazyPersistService.java:shutdown(169)) - Shutting down all async
lazy persist service threads
2016-07-13 23:58:34,210 [main] INFO impl.RamDiskAsyncLazyPersistService
(RamDiskAsyncLazyPersistService.java:shutdown(176)) - All async lazy persist
service threads have been shut down
2016-07-13 23:58:34,210 [main] INFO datanode.DataNode
(DataNode.java:shutdown(2020)) - Shutdown complete.
2016-07-13 23:58:34,211 [main] INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:stopAndJoinNameNode(1981)) - Shutting down the namenode
2016-07-13 23:58:34,211 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:stopActiveServices(1237)) - Stopping services started for
active state
2016-07-13 23:58:34,211 [main] INFO namenode.FSEditLog
(FSEditLog.java:endCurrentLogSegment(1318)) - Ending log segment 1
2016-07-13 23:58:34,211
[org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@2ab2710]
INFO namenode.FSNamesystem (FSNamesystem.java:run(3846)) -
NameNodeEditLogRoller was interrupted, exiting
2016-07-13 23:58:34,212
[org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@253b380a]
INFO namenode.FSNamesystem (FSNamesystem.java:run(3933)) -
LazyPersistFileScrubber was interrupted, exiting
2016-07-13 23:58:34,212 [main] INFO namenode.FSEditLog
(FSEditLog.java:printStatistics(721)) - Number of transactions: 2 Total time
for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of
syncs: 3 SyncTimes(ms): 2 1
2016-07-13 23:58:34,213 [main] INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current/edits_inprogress_0000000000000000001
->
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000002
2016-07-13 23:58:34,214 [main] INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2/current/edits_inprogress_0000000000000000001
->
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000002
2016-07-13 23:58:34,214 [CacheReplicationMonitor(1385212060)] INFO
blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(170))
- Shutting down CacheReplicationMonitor
2016-07-13 23:58:34,215 [main] INFO ipc.Server (Server.java:stop(2825)) -
Stopping server on 53748
2016-07-13 23:58:34,217 [IPC Server listener on 53748] INFO ipc.Server
(Server.java:run(927)) - Stopping IPC Server listener on 53748
2016-07-13 23:58:34,217 [StorageInfoMonitor] INFO blockmanagement.BlockManager
(BlockManager.java:run(4216)) - Stopping thread.
2016-07-13 23:58:34,217 [ReplicationMonitor] INFO blockmanagement.BlockManager
(BlockManager.java:run(4181)) - Stopping ReplicationMonitor.
2016-07-13 23:58:34,217 [IPC Server Responder] INFO ipc.Server
(Server.java:run(1064)) - Stopping IPC Server Responder
2016-07-13 23:58:34,225 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:stopActiveServices(1237)) - Stopping services started for
active state
2016-07-13 23:58:34,226 [main] INFO namenode.FSNamesystem
(FSNamesystem.java:stopStandbyServices(1330)) - Stopping services started for
standby state
2016-07-13 23:58:34,229 [main] INFO mortbay.log (Slf4jLog.java:info(67)) -
Stopped SelectChannelConnector@localhost:0
2016-07-13 23:58:34,331 [main] INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
2016-07-13 23:58:34,333 [main] INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
2016-07-13 23:58:34,334 [main] INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:shutdown(607)) - DataNode metrics system shutdown
complete.
{noformat}
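The part of the log that stands out is the {{FSNamesystem write lock held for 1002 ms}} warning from the Block report processor thread just before the cluster is torn down. Assuming the assertion at TestBlockManager.java:1074 is a timed wait on the block report queue draining (I haven't re-checked the exact test code, so the latch and timeout below are purely illustrative), a wait bound on the order of one second would intermittently lose the race against that lock hold on a loaded build machine. A minimal sketch of that failure shape:
{code}
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative only -- not the actual TestBlockManager code.
// assertTrue without a message throws a null-message AssertionError,
// consistent with the "java.lang.AssertionError: null" in the stack
// trace above.
class QueueDrainSketch {
  void awaitDrained(CountDownLatch drained) throws InterruptedException {
    // If the BlockReportProcessingThread holds the namesystem write lock
    // for ~1s (as the log shows), a fixed 1-second bound can expire
    // before the queue drains, and await() returns false.
    assertTrue(drained.await(1, TimeUnit.SECONDS));
  }
}
{code}
If that is what's happening, bumping the bound or polling with something like GenericTestUtils#waitFor instead of a single fixed-timeout await should make the test robust.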
> Fix intermittently failing TestBlockManager#testBlockReportQueueing
> -------------------------------------------------------------------
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode, test
> Reporter: Mingliang Liu
>
> See example failing [stack
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}