ZoukeNN edited a comment on issue #1084: Condense fluo.sh
URL: https://github.com/apache/fluo/issues/1084#issuecomment-596627028

I see this in /install/logs/setup/accumulo.err:

```
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2020-03-09 08:57:02,347 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/flyingcanopy/fluo-uno/install/accumulo-2.0.0/conf/accumulo.properties
2020-03-09 08:57:03,708 [init.Initialize] INFO : Hadoop Filesystem is hdfs://localhost:8020
2020-03-09 08:57:03,709 [init.Initialize] INFO : Accumulo data dirs are [hdfs://localhost:8020/accumulo]
2020-03-09 08:57:03,709 [init.Initialize] INFO : Zookeeper server is localhost:2181
2020-03-09 08:57:03,710 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
2020-03-09 08:57:03,922 [init.Initialize] ERROR: Fatal exception
java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
    at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
    at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
    at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
    at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.net.ConnectException: Call From ubuntu/192.168.65.128 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:833)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
    at org.apache.hadoop.ipc.Client.call(Client.java:1491)
    at org.apache.hadoop.ipc.Client.call(Client.java:1388)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:907)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1666)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1576)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1573)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1588)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1683)
    at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
    at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
    ... 4 more
Caused by: java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.Net.pollConnect(Native Method)
    at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:579)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:820)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:700)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:804)
    at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:421)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1606)
    at org.apache.hadoop.ipc.Client.call(Client.java:1435)
    ... 28 more
2020-03-09 08:57:03,928 [start.Main] ERROR: Thread 'init' died.
java.lang.RuntimeException: java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:997)
    at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
    at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
    at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
    at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
    ... 2 more
Caused by: java.net.ConnectException: Call From ubuntu/192.168.65.128 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:833)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
    at org.apache.hadoop.ipc.Client.call(Client.java:1491)
    at org.apache.hadoop.ipc.Client.call(Client.java:1388)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:907)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1666)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1576)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1573)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1588)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1683)
    at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
    at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
    ... 4 more
Caused by: java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.Net.pollConnect(Native Method)
    at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:579)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:820)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:700)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:804)
    at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:421)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1606)
    at org.apache.hadoop.ipc.Client.call(Client.java:1435)
    ... 28 more
```
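The root cause in that trace is the `ConnectException: Connection refused` when Accumulo's initializer contacts the HDFS NameNode at `hdfs://localhost:8020`, which suggests Hadoop was not running (or not listening on that port) when the Accumulo setup step ran. A quick way to check this from the shell might look like the sketch below; the Hadoop log path and the re-run command are assumptions based on the fluo-uno install layout shown above, not something confirmed in this issue.

```sh
# Are the Hadoop JVMs actually running? NameNode and DataNode should appear here.
jps

# Is anything listening on the NameNode RPC port that Accumulo is trying to reach?
ss -ltn | grep 8020

# If nothing is listening, look at the Hadoop setup log for the real failure
# (path is an assumption, mirroring the accumulo.err path quoted above).
tail -n 50 ~/fluo-uno/install/logs/setup/hadoop.err

# Once HDFS and ZooKeeper are confirmed up, re-run the Accumulo setup step
# (command assumed from the fluo-uno README).
./bin/uno setup accumulo
```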
