kevinrr888 commented on PR #5073: URL: https://github.com/apache/accumulo/pull/5073#issuecomment-2486386703
One issue with this current impl is that the stack trace will still show an exception (this time a `ConnectException`) even though the calling code handles all `Exception`s. My guess is that the exception is occurring in another thread. So, an output of `accumulo check-server-config` would look like this with no instance running:
```
2024-11-19T12:16:25,916 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/krathbun/Desktop/github/fluo-uno/install/accumulo-2.1.4-SNAPSHOT/conf/accumulo.properties
2024-11-19T12:16:26,300 [fs.VolumeManager] ERROR: Problem reading instance id out of hdfs at hdfs://localhost:8020/accumulo/instance_id
java.net.ConnectException: Call From groot/10.0.0.182 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481) ~[?:?]
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:930) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:845) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1571) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1513) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) ~[hadoop-client-api-3.3.6.jar:?]
	at jdk.proxy2/jdk.proxy2.$Proxy30.getListing(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:689) ~[hadoop-client-api-3.3.6.jar:?]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[?:?]
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.base/java.lang.reflect.Method.invoke(Method.java:569) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-client-api-3.3.6.jar:?]
	at jdk.proxy2/jdk.proxy2.$Proxy31.getListing(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1702) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1686) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1113) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:149) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1188) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1185) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1195) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:214) ~[accumulo-server-base-2.1.4-SNAPSHOT.jar:2.1.4-SNAPSHOT]
	at org.apache.accumulo.server.conf.CheckServerConfig.main(CheckServerConfig.java:54) ~[accumulo-server-base-2.1.4-SNAPSHOT.jar:2.1.4-SNAPSHOT]
	at org.apache.accumulo.server.conf.CheckServerConfig.execute(CheckServerConfig.java:78) ~[accumulo-server-base-2.1.4-SNAPSHOT.jar:2.1.4-SNAPSHOT]
	at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.4-SNAPSHOT.jar:2.1.4-SNAPSHOT]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) ~[?:?]
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) ~[?:?]
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:205) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:600) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:652) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:773) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:347) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1632) ~[hadoop-client-api-3.3.6.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1457) ~[hadoop-client-api-3.3.6.jar:?]
	... 28 more
2024-11-19T12:16:26,300 [conf.CheckServerConfig] WARN : Performed only a subset of checks (which passed). There is a problem relating to the instance: Can't tell if Accumulo is initialized; can't read instance id at hdfs://localhost:8020/accumulo/instance_id and no further checks could be done. If this is unexpected, make sure an instance is running and re-run the command.
```
The subset of checks has still run and passed (as noted in the output), but the stack trace unfortunately makes that info less visible; it might be better if the exception were not printed here.
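For illustration, here's a minimal sketch (the class and method names are hypothetical, not Accumulo's actual code) of how a trace can end up in the output despite a catch-all in the caller: if any lower layer logs or prints the exception itself before rethrowing, the trace has already been emitted by the time the top-level `catch (Exception e)` runs.

```java
import java.net.ConnectException;

// Hypothetical sketch: a helper that reports its own failure before
// propagating it, so a catch-all in the caller cannot hide the trace.
public class LoggedThenCaught {

    // Stand-in for a lower-level read that fails; it prints the stack
    // trace itself (as a logging call would) and then rethrows.
    static String readInstanceId() throws ConnectException {
        try {
            throw new ConnectException("Connection refused");
        } catch (ConnectException e) {
            e.printStackTrace(); // trace reaches the output here...
            throw e;             // ...before the caller ever sees it
        }
    }

    // Stand-in for the calling code: catching Exception here cannot
    // suppress the trace that was already printed above.
    static String run() {
        try {
            return readInstanceId();
        } catch (Exception e) {
            return "caught: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Under this (assumed) structure, suppressing the trace would mean either not logging it at that lower level or lowering the log level there, rather than adding more handling in the caller.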