[
https://issues.apache.org/jira/browse/HDFS-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149771#comment-17149771
]
Uma Maheswara Rao G commented on HDFS-15450:
--------------------------------------------
{noformat}
Error encountered requiring NN shutdown. Shutting down immediately.
java.io.IOException: ViewFs: Cannot initialize: Empty Mount table in config for viewfs://viewfs-1.viewfs.root.hwx.site/
    at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:599)
    at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:278)
    at org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:278)
    at org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme.initialize(ViewFileSystemOverloadScheme.java:129)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3396)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3456)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3424)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:518)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:266)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:874)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:871)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
    at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:515)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:870)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.access$100(NameNode.java:216)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1934)
    at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
    at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:59)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1777)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1773)
    at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:5409)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882)
{noformat}
Here core-site.xml sets fs.defaultFS=hdfs://ns1, but when the active NN starts
up, it overwrites fs.defaultFS with its RPC address. So the default FS becomes
viewfs-1.viewfs.root.hwx.site, while the mount table is configured only for the
cluster name "ns1". As a result, FS initialization fails.
{code}
// If the RPC address is set use it to (re-)configure the default FS
if (conf.get(DFS_NAMENODE_RPC_ADDRESS_KEY) != null) {
  URI defaultUri = URI.create(HdfsConstants.HDFS_URI_SCHEME + "://"
      + conf.get(DFS_NAMENODE_RPC_ADDRESS_KEY));
  conf.set(FS_DEFAULT_NAME_KEY, defaultUri.toString());
  LOG.debug("Setting {} to {}", FS_DEFAULT_NAME_KEY, defaultUri);
}
{code}
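For reference, this is the kind of configuration that triggers the failure. The values below are illustrative (modeled on the hostnames in the trace above), not the exact cluster config:

```xml
<!-- core-site.xml (illustrative values) -->
<configuration>
  <!-- Default FS points at the mount-table name "ns1" -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- ViewFSOverloadScheme takes over the hdfs scheme -->
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme</value>
  </property>
  <!-- Mount links exist only for mount table "ns1" -->
  <property>
    <name>fs.viewfs.mounttable.ns1.link./data</name>
    <value>hdfs://ns1/data</value>
  </property>
</configuration>
```

Once the NN rewrites fs.defaultFS to its RPC address (viewfs-1.viewfs.root.hwx.site), FileSystem.get looks for a mount table under that name, finds none, and throws the "Empty Mount table in config" error above.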
Currently trash initialization uses FileSystem.get(conf) to get the fs
instance. That is the generic way to get a file system, but in the NN case we
know the fs is always DistributedFileSystem.
So, to solve this issue, the NN does not need to initialize the mount table at
all. But FileSystem.get will do that whenever core-site.xml contains
ViewFSOverloadScheme configs. So, it's better to initialize the
DistributedFileSystem class directly instead of calling FileSystem.get.
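The mechanism can be illustrated without Hadoop at all: FileSystem.get resolves the implementation class from the URI scheme, so a scheme-level override (like ViewFSOverloadScheme bound to hdfs) is always picked up, while constructing the concrete class directly bypasses that lookup. A minimal toy sketch — all class names here are illustrative stand-ins, not Hadoop's real ones:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of scheme-based filesystem resolution: get() consults a
// scheme -> factory table (like fs.<scheme>.impl), so an overload-scheme
// override is picked up; direct construction with "new" is not affected.
public class SchemeResolutionDemo {

    public interface ToyFs {
        String kind();
    }

    /** Stand-in for DistributedFileSystem: always initializes. */
    public static class ToyDistributedFs implements ToyFs {
        public String kind() { return "distributed"; }
    }

    /** Stand-in for ViewFileSystemOverloadScheme: requires a mount table. */
    public static class ToyOverloadFs implements ToyFs {
        public ToyOverloadFs(Map<String, String> mountTable) {
            if (mountTable.isEmpty()) {
                // Mirrors "ViewFs: Cannot initialize: Empty Mount table in config"
                throw new IllegalStateException("Empty mount table in config");
            }
        }
        public String kind() { return "overload"; }
    }

    /** Stand-in for the fs.<scheme>.impl lookup inside FileSystem.get. */
    private static final Map<String, Supplier<ToyFs>> IMPLS = new HashMap<>();

    public static void register(String scheme, Supplier<ToyFs> factory) {
        IMPLS.put(scheme, factory);
    }

    /** Resolves the implementation from the URI scheme, like FileSystem.get. */
    public static ToyFs get(String scheme) {
        return IMPLS.get(scheme).get();
    }

    public static void main(String[] args) {
        // Config maps the hdfs scheme to the overload impl, but no mount
        // table entries exist for the requested authority:
        register("hdfs", () -> new ToyOverloadFs(new HashMap<>()));
        try {
            get("hdfs"); // what the trash emptier effectively does today
        } catch (IllegalStateException e) {
            System.out.println("scheme-resolved path failed: " + e.getMessage());
        }
        // The proposed direction: bypass scheme resolution entirely.
        System.out.println("direct construction: " + new ToyDistributedFs().kind());
    }
}
```

In Hadoop terms, the fix proposed here amounts to constructing DistributedFileSystem directly in startTrashEmptier rather than going through FileSystem.get, so the overload-scheme binding in core-site.xml never applies to the NN's own trash emptier.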
> Fix NN trash emptier to work in HA mode if ViewFSOverloadScheme enabled
> ----------------------------------------------------------------------
>
> Key: HDFS-15450
> URL: https://issues.apache.org/jira/browse/HDFS-15450
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Uma Maheswara Rao G
> Assignee: Uma Maheswara Rao G
> Priority: Major
>
> When users add mount links only for fs.defaultFS, in an HA NN the trash
> emptier will be initialized with the RPC address set as defaultFS. It will
> fail to start because no mount links may be configured for the RPC-address
> based URI.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)