[jira] [Comment Edited] (HDFS-9096) Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0

2016-10-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588854#comment-15588854
 ] 

Kihwal Lee edited comment on HDFS-9096 at 10/19/16 2:05 PM:


Were you issuing rollback with 2.7.2?


was (Author: kihwal):
Were you issuing rollback with 2.7.1?

> Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0
> 
>
> Key: HDFS-9096
> URL: https://issues.apache.org/jira/browse/HDFS-9096
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.4.0
>Reporter: Harpreet Kaur
>
> I tried to do a rolling upgrade from Hadoop 2.4.0 to Hadoop 2.7.1. As per 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade
>  one can roll back to the previous release provided the finalise step has not been done. 
> I upgraded the setup but did not finalise the upgrade, and tried to roll back 
> HDFS to 2.4.0.
> I tried the following steps:
>   1.  Shutdown all NNs and DNs.
>   2.  Restore the pre-upgrade release on all machines.
>   3.  Start NN1 as Active with the "-rollingUpgrade rollback" option.
> I am getting the following error after the 3rd step:
> 15/09/01 17:53:35 INFO namenode.AclConfigFlag: ACLs enabled? false
> 15/09/01 17:53:35 INFO common.Storage: Lock on <>/in_use.lock 
> acquired by nodename 12152@VM-2
> 15/09/01 17:53:35 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected 
> version of storage directory /data/yarn/namenode. Reported: -63. Expecting = 
> -56.
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:639)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:455)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:511)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1304)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1370)
> 15/09/01 17:53:35 INFO mortbay.log: Stopped 
> SelectChannelConnector@0.0.0.0:50070
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: Stopping NameNode metrics 
> system...
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: NameNode metrics system 
> stopped.
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: NameNode metrics system 
> shutdown complete.
> 15/09/01 17:53:35 FATAL namenode.NameNode: Exception in namenode join
> From the rolling upgrade documentation it can be inferred that rolling upgrade is 
> supported from Hadoop 2.4.0 onwards, but rollingUpgrade rollback to Hadoop 2.4.0 
> seems to be broken. It throws the above-mentioned error.
> Are there any other steps to perform a rollback (from a rolling upgrade), or is it 
> not supported to roll back to Hadoop 2.4.0?
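The mismatch in the stack trace above can be condensed into a minimal standalone sketch (hypothetical class and method names, not the actual HDFS StorageInfo API). HDFS layout versions are negative and decrease as the on-disk format evolves, so the -63 written by 2.7.1 reads as newer than the -56 that the 2.4.0 software supports, and loading is rejected:

```java
// Hypothetical standalone sketch of the 2.4.0-era layout-version check;
// class and method names are illustrative, not the real HDFS API.
public class LayoutVersionCheck {
    // Layout version the 2.4.0 software supports (from the error message).
    static final int SUPPORTED_LAYOUT_VERSION = -56;

    /** Rejects an on-disk layout version newer (more negative) than the
     *  software supports, as the quoted stack trace shows. */
    static int checkLayoutVersion(int storedVersion) {
        if (storedVersion < SUPPORTED_LAYOUT_VERSION) {
            throw new IllegalStateException("Unexpected version of storage"
                + " directory. Reported: " + storedVersion
                + ". Expecting = " + SUPPORTED_LAYOUT_VERSION + ".");
        }
        return storedVersion;
    }

    public static void main(String[] args) {
        try {
            checkLayoutVersion(-63); // value written by the 2.7.1 upgrade
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```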



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9096) Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0

2016-10-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585675#comment-15585675
 ] 

Vinayakumar B edited comment on HDFS-9096 at 10/18/16 3:02 PM:
---

I think this was fixed automatically by HDFS-7185, which is present in 2.6.0.

After a rolling upgrade to a version with a layoutVersion change, the upgraded VERSION 
file will contain the new version.
Without HDFS-7185, setting the layout version fails because the VERSION file contains 
the new layout version.

With HDFS-7185, while loading the rollback image, the "layoutVersion" from the 
VERSION file is ignored and set to the software's version, but it is 
checked strictly against the image file.

The following change in NNStorage.java does this.
{code}
  void readProperties(StorageDirectory sd, StartupOption startupOption)
      throws IOException {
    Properties props = readPropertiesFile(sd.getVersionFile());
    if (HdfsServerConstants.RollingUpgradeStartupOption.ROLLBACK.matches(
        startupOption)) {
      int lv = Integer.parseInt(getProperty(props, sd, "layoutVersion"));
      if (lv > getServiceLayoutVersion()) {
        // we should not use a newer version for rollingUpgrade rollback
        throw new IncorrectVersionException(getServiceLayoutVersion(), lv,
            "storage directory " + sd.getRoot().getAbsolutePath());
      }
      props.setProperty("layoutVersion",
          Integer.toString(HdfsServerConstants.NAMENODE_LAYOUT_VERSION));
    }
    setFieldsFromProperties(props, sd);
  }
{code}


But since 2.8.0 (HDFS-8432), updating the VERSION file on rolling upgrade is 
avoided altogether.
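As a minimal standalone sketch (hypothetical class and constant names, not the actual NNStorage API) of what the change above does during a rollback: a stored layoutVersion numerically greater than the software's is rejected, mirroring the check in the snippet, and otherwise the stored value is ignored and overwritten with the software's own version, leaving the strict check to the image file:

```java
import java.util.Properties;

// Hypothetical sketch mirroring the HDFS-7185 logic quoted above;
// names are illustrative, not the real HDFS API.
public class RollbackVersionHandling {
    // Stand-in for the software's layout version (2.4.0 in this report).
    static final int SERVICE_LAYOUT_VERSION = -56;

    /** On a rolling-upgrade rollback: reject a stored layoutVersion that is
     *  numerically greater than the software's, otherwise override the
     *  stored value with the software's own version. */
    static Properties adjustForRollback(Properties props) {
        int lv = Integer.parseInt(props.getProperty("layoutVersion"));
        if (lv > SERVICE_LAYOUT_VERSION) {
            throw new IllegalStateException("unexpected layoutVersion " + lv);
        }
        // Ignore what the upgrade wrote; trust the image-file check instead.
        props.setProperty("layoutVersion",
            Integer.toString(SERVICE_LAYOUT_VERSION));
        return props;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("layoutVersion", "-63"); // written by 2.7.1
        // -63 is not greater than -56, so the stored value is simply replaced
        System.out.println(
            adjustForRollback(props).getProperty("layoutVersion")); // prints -56
    }
}
```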


was (Author: vinayrpet):
I think this was fixed automatically by HDFS-7185, which is present in 2.6.0.

After a rolling upgrade to a version with a layoutVersion change, the upgraded VERSION 
file will contain the new version.
Without HDFS-7185, setting the layout version fails because the VERSION file contains 
the new layout version.

With HDFS-7185, while loading the rollback image, the "layoutVersion" from the 
VERSION file is ignored and set to the software's version, but it is 
checked strictly against the image file.


[jira] [Comment Edited] (HDFS-9096) Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0

2016-10-18 Thread Dinesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15475894#comment-15475894
 ] 

Dinesh edited comment on HDFS-9096 at 10/18/16 1:35 PM:


Facing the same issue when I roll back (after a rolling upgrade) from Hadoop 2.7.2 to 
2.5.2. Could anyone please tell me whether this is a known bug?

Based on the logs below, please suggest whether this should be considered a new bug.

My NameNode log details:
C:\SDK\Hadoop\bin>hdfs namenode -rollingUpgrade rollback
16/10/18 18:58:25 INFO namenode.NameNode: STARTUP_MSG:
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = myserver/192.168.10.1
STARTUP_MSG:   args = [-rollingUpgrade, rollback]
STARTUP_MSG:   version = 2.5.2
STARTUP_MSG:   classpath = ...
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'Dinesh' on 
2016-01-14T11:05Z
STARTUP_MSG:   java = 1.7.0_51
/
16/10/18 18:58:25 INFO namenode.NameNode: createNameNode [-rollingUpgrade, 
rollback]
16/10/18 18:58:25 INFO impl.MetricsConfig: loaded properties from 
hadoop-metrics2.properties
16/10/18 18:58:25 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 
second(s).
16/10/18 18:58:25 INFO impl.MetricsSystemImpl: NameNode metrics system started
16/10/18 18:58:25 INFO namenode.NameNode: fs.defaultFS is hdfs://hacluster
16/10/18 18:58:25 INFO namenode.NameNode: Clients are to use hacluster to 
access this namenode/service.
16/10/18 18:58:26 INFO hdfs.DFSUtil: Starting web server as: 
${dfs.web.authentication.kerberos.principal}
16/10/18 18:58:26 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: 
http://myserver.root.Dinesh.lan:50070
16/10/18 18:58:26 INFO mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
16/10/18 18:58:26 INFO http.HttpRequestLog: Http request log for 
http.requests.namenode is not defined
16/10/18 18:58:26 INFO http.HttpServer2: Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
16/10/18 18:58:26 INFO http.HttpServer2: Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context hdfs
16/10/18 18:58:26 INFO http.HttpServer2: Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
16/10/18 18:58:26 INFO http.HttpServer2: Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
16/10/18 18:58:26 INFO http.HttpServer2: Added filter 
'org.apache.hadoop.hdfs.web.AuthFilter' 
(class=org.apache.hadoop.hdfs.web.AuthFilter)
16/10/18 18:58:26 INFO http.HttpServer2: addJerseyResourcePackage: 
packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
 pathSpec=/webhdfs/v1/*
16/10/18 18:58:26 INFO http.HttpServer2: Jetty bound to port 50070
16/10/18 18:58:26 INFO mortbay.log: jetty-6.1.26
16/10/18 18:58:26 WARN server.AuthenticationFilter: 'signature.secret' 
configuration not set, using a random value as secret
16/10/18 18:58:26 INFO mortbay.log: Started 
HttpServer2$selectchannelconnectorwithsafestar...@myserver.root.dinesh.lan:50070
16/10/18 18:58:26 WARN namenode.FSNamesystem: Only one image storage directory 
(dfs.namenode.name.dir) configured. Beware of data loss due to lack of 
redundant storage directories!
16/10/18 18:58:26 INFO namenode.FSNamesystem: fsLock is fair:true
16/10/18 18:58:26 INFO blockmanagement.DatanodeManager: 
dfs.block.invalidate.limit=1000
16/10/18 18:58:26 INFO blockmanagement.DatanodeManager: 
dfs.namenode.datanode.registration.ip-hostname-check=true
16/10/18 18:58:26 INFO blockmanagement.BlockManager: 
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/10/18 18:58:26 INFO blockmanagement.BlockManager: The block deletion will 
start around 2016 Oct 18 18:58:26
16/10/18 18:58:26 INFO util.GSet: Computing capacity for map BlocksMap
16/10/18 18:58:26 INFO util.GSet: VM type   = 64-bit
16/10/18 18:58:26 INFO util.GSet: 2.0% max memory 910.5 MB = 18.2 MB
16/10/18 18:58:26 INFO util.GSet: capacity  = 2^21 = 2097152 entries
16/10/18 18:58:26 INFO blockmanagement.BlockManager: 
dfs.block.access.token.enable=false
16/10/18 18:58:26 INFO blockmanagement.BlockManager: defaultReplication 
= 3
16/10/18 18:58:26 INFO blockmanagement.BlockManager: maxReplication 
= 512
16/10/18 18:58:26 INFO blockmanagement.BlockManager: minReplication 
= 1
16/10/18 18:58:26 INFO blockmanagement.BlockManager: maxReplicationStreams  
= 2
16/10/18 18:58:26 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  
= false
16/10/18 18:58:26 INFO blockmanagement.BlockManager: replicationRecheckInterval 
= 3000
16/10/18 18:58:26 INFO blockmanagement.BlockManager: 

