[ 
https://issues.apache.org/jira/browse/HDFS-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14278485#comment-14278485
 ] 

frank commented on HDFS-3682:
-----------------------------

Connected to the target VM, address: '127.0.0.1:35785', transport: 'socket'
2015-01-16 01:09:18,052 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:<init>(372)) - starting cluster: numNameNodes=1, 
numDataNodes=1
2015-01-16 01:09:18,914 WARN  util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
Formatting using clusterid: testClusterID
2015-01-16 01:09:18,961 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(739)) - fsLock is fair:true
2015-01-16 01:09:19,013 INFO  Configuration.deprecation 
(Configuration.java:warnOnceIfDeprecated(1019)) - 
hadoop.configured.node.mapping is deprecated. Instead, use 
net.topology.configured.node.mapping
2015-01-16 01:09:19,014 INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:<init>(229)) - dfs.block.invalidate.limit=1000
2015-01-16 01:09:19,014 INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:<init>(235)) - 
dfs.namenode.datanode.registration.ip-hostname-check=true
2015-01-16 01:09:19,017 INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(71)) - 
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-01-16 01:09:19,021 INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will 
start around 2015 Jan 16 01:09:19
2015-01-16 01:09:19,025 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
BlocksMap
2015-01-16 01:09:19,025 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-01-16 01:09:19,029 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 928 MB = 18.6 MB
2015-01-16 01:09:19,030 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^21 = 2097152 
entries
2015-01-16 01:09:19,056 INFO  blockmanagement.BlockManager 
(BlockManager.java:createBlockTokenSecretManager(354)) - 
dfs.block.access.token.enable=false
2015-01-16 01:09:19,057 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(339)) - defaultReplication         = 1
2015-01-16 01:09:19,057 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(340)) - maxReplication             = 512
2015-01-16 01:09:19,058 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(341)) - minReplication             = 1
2015-01-16 01:09:19,061 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(342)) - maxReplicationStreams      = 2
2015-01-16 01:09:19,062 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(343)) - shouldCheckForEnoughRacks  = false
2015-01-16 01:09:19,062 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(344)) - replicationRecheckInterval = 3000
2015-01-16 01:09:19,063 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(345)) - encryptDataTransfer        = false
2015-01-16 01:09:19,063 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(346)) - maxNumBlocksToLog          = 1000
2015-01-16 01:09:19,086 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(758)) - fsOwner             = root (auth:SIMPLE)
2015-01-16 01:09:19,087 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(759)) - supergroup          = supergroup
2015-01-16 01:09:19,087 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(760)) - isPermissionEnabled = true
2015-01-16 01:09:19,091 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(771)) - HA Enabled: false
2015-01-16 01:09:19,096 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(808)) - Append Enabled: true
2015-01-16 01:09:19,696 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
INodeMap
2015-01-16 01:09:19,696 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-01-16 01:09:19,697 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 928 MB = 9.3 MB
2015-01-16 01:09:19,697 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^20 = 1048576 
entries
2015-01-16 01:09:19,714 INFO  namenode.NameNode (FSDirectory.java:<init>(209)) 
- Caching file names occuring more than 10 times
2015-01-16 01:09:19,734 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
cachedBlocks
2015-01-16 01:09:19,734 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-01-16 01:09:19,735 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 928 MB = 2.3 MB
2015-01-16 01:09:19,738 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^18 = 262144 
entries
2015-01-16 01:09:19,740 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(5095)) - dfs.namenode.safemode.threshold-pct = 
0.9990000128746033
2015-01-16 01:09:19,740 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(5096)) - dfs.namenode.safemode.min.datanodes = 0
2015-01-16 01:09:19,741 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(5097)) - dfs.namenode.safemode.extension     = 0
2015-01-16 01:09:19,744 INFO  namenode.FSNamesystem 
(FSNamesystem.java:initRetryCache(892)) - Retry cache on namenode is enabled
2015-01-16 01:09:19,745 INFO  namenode.FSNamesystem 
(FSNamesystem.java:initRetryCache(900)) - Retry cache will use 0.03 of total 
heap and retry cache entry expiry time is 600000 millis
2015-01-16 01:09:19,750 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
NameNodeRetryCache
2015-01-16 01:09:19,750 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-01-16 01:09:19,751 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 
928 MB = 285.1 KB
2015-01-16 01:09:19,751 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^15 = 32768 
entries
2015-01-16 01:09:19,760 INFO  namenode.NNConf (NNConf.java:<init>(62)) - ACLs 
enabled? false
2015-01-16 01:09:19,760 INFO  namenode.NNConf (NNConf.java:<init>(66)) - XAttrs 
enabled? true
2015-01-16 01:09:19,761 INFO  namenode.NNConf (NNConf.java:<init>(74)) - 
Maximum size of an xattr: 16384
2015-01-16 01:09:19,959 INFO  namenode.FSImage (FSImage.java:format(145)) - 
Allocated new BlockPoolId: BP-371526502-127.0.0.1-1421341759782
2015-01-16 01:09:20,306 INFO  common.Storage (NNStorage.java:format(550)) - 
Storage directory 
/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters/cluster2/name1
 has been successfully formatted.
2015-01-16 01:09:20,377 INFO  common.Storage (NNStorage.java:format(550)) - 
Storage directory 
/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters/cluster2/name2
 has been successfully formatted.
2015-01-16 01:09:20,731 INFO  namenode.NNStorageRetentionManager 
(NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 1 
images with txid >= 0
2015-01-16 01:09:20,739 INFO  namenode.NameNode 
(NameNode.java:createNameNode(1342)) - createNameNode []
2015-01-16 01:09:20,966 WARN  impl.MetricsConfig 
(MetricsConfig.java:loadFirst(124)) - Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-01-16 01:09:21,083 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:startTimer(345)) - Scheduled snapshot period at 10 
second(s).
2015-01-16 01:09:21,084 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:start(184)) - NameNode metrics system started
2015-01-16 01:09:21,090 INFO  namenode.NameNode 
(NameNode.java:setClientNamenodeAddress(342)) - fs.defaultFS is 
hdfs://127.0.0.1:0
2015-01-16 01:09:21,147 INFO  hdfs.DFSUtil 
(DFSUtil.java:httpServerTemplateForNNAndJN(1611)) - Starting web server as: 
${dfs.web.authentication.kerberos.principal}
2015-01-16 01:09:21,148 INFO  hdfs.DFSUtil 
(DFSUtil.java:httpServerTemplateForNNAndJN(1622)) - Starting Web-server for 
hdfs at: http://localhost:0
2015-01-16 01:09:21,156 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stop(201)) - Stopping NameNode metrics system...
2015-01-16 01:09:21,157 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stop(207)) - NameNode metrics system stopped.
2015-01-16 01:09:21,158 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:shutdown(573)) - NameNode metrics system shutdown 
complete.
2015-01-16 01:09:21,159 ERROR hdfs.MiniDFSCluster 
(MiniDFSCluster.java:initMiniDFSCluster(709)) - IOE creating namenodes. 
Permissions dump:
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters/cluster2/data':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters/cluster2/data
        permissions: ----
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters/cluster2':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters/cluster2
        permissions: drwx
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster/miniclusters
        permissions: drwx
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4/TestMiniDFSCluster
        permissions: drwx
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data/7I1KldK7e4
        permissions: drwx
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test/data
        permissions: drwx
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test':
 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target/test
        permissions: drwx
path 
'/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target': 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs/target
        permissions: drwx
path '/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs': 
        
absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project/hadoop-hdfs
        permissions: drwx
path '/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project': 
        absolute:/root/work/hadoop-2.5.1-src-eclipse/hadoop-hdfs-project
        permissions: drwx
path '/root/work/hadoop-2.5.1-src-eclipse': 
        absolute:/root/work/hadoop-2.5.1-src-eclipse
        permissions: drwx
path '/root/work': 
        absolute:/root/work
        permissions: drwx
path '/root': 
        absolute:/root
        permissions: drwx
path '/': 
        absolute:/
        permissions: drwx


java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH

This exception comes up when I run the JUnit test. Can anyone help?
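
For context, the test follows the usual MiniDFSCluster pattern; below is only a minimal sketch of that kind of test (the class name and configuration are placeholders, not the actual failing test), to show where the failure surfaces:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class MiniClusterSmokeTest {

  @Test
  public void startsMiniCluster() throws Exception {
    Configuration conf = new Configuration();
    // MiniDFSCluster.Builder#build() runs initMiniDFSCluster(), which is where
    // the permissions dump above is logged and where the
    // "webapps/hdfs not found in CLASSPATH" FileNotFoundException is raised
    // while the NameNode web server is being started.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown();
    }
  }
}
{noformat}

In the log above the failure happens right after "Starting Web-server for hdfs at: http://localhost:0", inside MiniDFSCluster.initMiniDFSCluster, which matches the build() call in a test like this.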

> MiniDFSCluster#init should provide more info when it fails
> ----------------------------------------------------------
>
>                 Key: HDFS-3682
>                 URL: https://issues.apache.org/jira/browse/HDFS-3682
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: test
>    Affects Versions: 2.0.0-alpha
>            Reporter: Eli Collins
>            Assignee: Todd Lipcon
>            Priority: Minor
>              Labels: test-fail
>             Fix For: 2.0.3-alpha
>
>         Attachments: hdfs-3682.txt
>
>
> I've seen the occasional test failure due to the MiniDFSCluster init failing 
> due to "java.io.IOException: Cannot remove data directory" for the 
> target/test/data/dfs/data directory. We should improve the error handling 
> here so we can better understand the failure, eg perhaps one of the tests 
> that chmods a directory fails and causes this failure.
> {noformat}
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:596)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:297)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:116)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:279)
> at 
> org.apache.hadoop.hdfs.security.TestDelegationToken.setUp(TestDelegationToken.java:77
> {noformat}



