Ahh, this did the trick! I also had to take the namenode out of safe mode before everything fully worked, though.
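For anyone who finds this thread later, this is roughly the sequence that got me back up. (Just a sketch, not gospel: it assumes you run the scripts from the Hadoop install directory and that the hdfs binary is on your PATH; exact paths vary by distribution.)

    # Restart HDFS, as Akhil suggested below
    sbin/stop-dfs.sh
    sbin/start-dfs.sh

    # After a restart the namenode may sit in safe mode (read-only) for a while;
    # check its state, and force it out if it never leaves on its own
    hdfs dfsadmin -safemode get
    hdfs dfsadmin -safemode leave    # use with care outside a dev cluster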
Thanks!

On Tue, Jun 2, 2015 at 12:09 AM, Akhil Das <ak...@sigmoidanalytics.com> wrote:
> It says your namenode is down (connection refused on 8020). You can restart
> HDFS by going into the hadoop directory and running sbin/stop-dfs.sh and
> then sbin/start-dfs.sh.
>
> Thanks
> Best Regards
>
> On Tue, Jun 2, 2015 at 5:03 AM, Su She <suhsheka...@gmail.com> wrote:
>>
>> Hello All,
>>
>> A bit scared I did something stupid... I killed a few PIDs that were
>> listening to ports 2183 (Kafka) and 4042 (Spark app). Some of the PIDs
>> didn't even seem to be stopped, as they were still running when I ran
>>
>> lsof -i:[port number]
>>
>> I'm not sure if the problem started before or after I ran these kill
>> commands, but now I can't connect to HDFS or start Spark, and I can't
>> access Hue. I'm afraid I accidentally killed an important process
>> related to HDFS, but I'm not sure which one it would be, since I
>> couldn't even kill some of the PIDs.
>>
>> Is it a coincidence that HDFS failed randomly, or is it likely that I
>> killed an important PID? How can I restart HDFS?
>>
>> Thanks a lot!
>>
>> Error on Hue:
>>
>> Cannot access: /user/ec2-user. The HDFS REST service is not available.
>> Note: You are a Hue admin but not a HDFS superuser (which is "hdfs").
>>
>> HTTPConnectionPool(host='ec2-ip-address.us-west-1.compute.amazonaws.com',
>> port=50070): Max retries exceeded with url:
>> /webhdfs/v1/user/ec2-user?op=GETFILESTATUS&user.name=hue&doas=ec2-user
>> (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
>>
>> Error when I try to open spark-shell or a Spark app:
>>
>> java.net.ConnectException: Call From
>> ip-10-0-2-216.us-west-1.compute.internal/10.0.2.216 to
>> ip-10-0-2-216.us-west-1.compute.internal:8020 failed on connection
>> exception: java.net.ConnectException: Connection refused; For more
>> details see: http://wiki.apache.org/hadoop/ConnectionRefused
>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>     at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
>>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1415)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>>     at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:744)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>     at com.sun.proxy.$Proxy21.getFileInfo(Unknown Source)
>>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1921)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)
>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)
>>     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
>>     at org.apache.spark.util.FileLogger.createLogDir(FileLogger.scala:123)
>>     at org.apache.spark.util.FileLogger.start(FileLogger.scala:115)
>>     at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:74)
>>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:353)
>>     at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:986)
>>     at $iwC$$iwC.<init>(<console>:9)
>>     at $iwC.<init>(<console>:18)
>>     at <init>(<console>:20)
>>     at .<init>(<console>:24)
>>     at .<clinit>(<console>)
>>     at .<init>(<console>:7)
>>     at .<clinit>(<console>)
>>     at $print(<console>)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:852)
>>     at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1125)
>>     at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:674)
>>     at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:705)
>>     at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:669)
>>     at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:828)
>>     at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:873)
>>     at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:785)
>>     at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
>>     at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
>>     at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:270)
>>     at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
>>     at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:60)
>>     at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:945)
>>     at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:147)
>>     at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:60)
>>     at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
>>     at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:60)
>>     at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:962)
>>     at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>>     at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:916)
>>     at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>>     at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:916)
>>     at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1011)
>>     at org.apache.spark.repl.Main$.main(Main.scala:31)
>>     at org.apache.spark.repl.Main.main(Main.scala)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
>>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>> Caused by: java.net.ConnectException: Connection refused
>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>>     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
>>     at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1382)
>>     ... 67 more
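
P.S. One lesson from this for anyone else tempted to kill PIDs blindly like I did: check what actually owns a port before killing anything. A rough sketch of the checks I'd run now (flags from my reading of the lsof man page; 8020 is the namenode RPC port on my setup, yours may differ):

    # Show the process listening on a given port, with numeric addresses/ports
    sudo lsof -nP -i :8020

    # Cross-check whether that PID is a Hadoop/Spark JVM before killing it
    jps -l           # lists running JVMs, e.g. NameNode, DataNode
    ps -fp <PID>     # full command line for the PID lsof reported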