milleruntime opened a new issue #1886:
URL: https://github.com/apache/accumulo/issues/1886


   Exception occurs during startup of a new cluster.
   <pre>
   2021-01-26T15:06:44,962 [fs.VolumeManagerImpl] DEBUG: exception getting EC policy for hdfs://localhost:8020/accumulo/wal
   java.io.FileNotFoundException: Path not found: /accumulo/wal
           at org.apache.hadoop.hdfs.server.namenode.FSDirErasureCodingOp.getErasureCodingPolicy(FSDirErasureCodingOp.java:371)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getErasureCodingPolicy(FSNamesystem.java:8170)
           at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getErasureCodingPolicy(NameNodeRpcServer.java:2504)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getErasureCodingPolicy(ClientNamenodeProtocolServerSideTranslatorPB.java:1938)
           at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
           at java.base/java.security.AccessController.doPrivileged(Native Method)
           at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
   
           at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
           at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
           at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
           at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
           at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DFSClient.getErasureCodingPolicy(DFSClient.java:3182) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem$67.doCall(DistributedFileSystem.java:3089) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem$67.doCall(DistributedFileSystem.java:3086) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem.getErasureCodingPolicy(DistributedFileSystem.java:3103) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.accumulo.server.fs.VolumeManagerImpl.canSyncAndFlush(VolumeManagerImpl.java:436) ~[accumulo-server-base-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
           at org.apache.accumulo.tserver.TabletServer.checkWalCanSync(TabletServer.java:979) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
           at org.apache.accumulo.tserver.TabletServer.<init>(TabletServer.java:255) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
           at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:232) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
           at org.apache.accumulo.tserver.TServerExecutable.execute(TServerExecutable.java:45) ~[accumulo-tserver-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
           at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:126) ~[accumulo-start-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT]
           at java.lang.Thread.run(Thread.java:834) [?:?]
   Caused by: org.apache.hadoop.ipc.RemoteException: Path not found: /accumulo/wal
           at org.apache.hadoop.hdfs.server.namenode.FSDirErasureCodingOp.getErasureCodingPolicy(FSDirErasureCodingOp.java:371)
           at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getErasureCodingPolicy(FSNamesystem.java:8170)
           at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getErasureCodingPolicy(NameNodeRpcServer.java:2504)
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getErasureCodingPolicy(ClientNamenodeProtocolServerSideTranslatorPB.java:1938)
           at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
   </pre>
   
   It seems this only happens the first time a new cluster is started, before 
the WAL directory has been created. It should be easy to prevent this error 
from being printed, since it is just noise in the debug log.
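
   One possible way to quiet this would be to treat a missing path as "no EC 
policy" before asking the namenode, either by checking for the directory first 
or by catching the `FileNotFoundException`. A minimal sketch of that idea, 
assuming a hypothetical helper (`getEcPolicyIfPresent` is illustrative, not 
the actual `VolumeManagerImpl.canSyncAndFlush` code):
   <pre>
   import java.io.FileNotFoundException;
   import java.io.IOException;

   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.hdfs.DistributedFileSystem;
   import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

   public class EcPolicyCheckSketch {

     // Returns the EC policy for the path, or null when the path does not
     // exist yet (e.g. /accumulo/wal on a brand new cluster) or the
     // filesystem is not HDFS. Hypothetical helper, not Accumulo's code.
     static ErasureCodingPolicy getEcPolicyIfPresent(FileSystem fs, Path path)
         throws IOException {
       if (!(fs instanceof DistributedFileSystem)) {
         return null; // EC policies only apply to HDFS
       }
       DistributedFileSystem dfs = (DistributedFileSystem) fs;
       try {
         // Skip the namenode RPC entirely when the directory is missing.
         if (!dfs.exists(path)) {
           return null;
         }
         return dfs.getErasureCodingPolicy(path);
       } catch (FileNotFoundException e) {
         // Path removed between the exists() check and the lookup; treat it
         // the same as "no policy" instead of logging a stack trace.
         return null;
       }
     }
   }
   </pre>
   Either the existence check or the catch alone would be enough to keep the 
stack trace out of the debug log; the sketch shows both only to cover the race 
between the check and the RPC.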

