The socket may have been in TIME_WAIT. Can you try again after a bit? The error
message definitely suggests that some other app is listening on that port.
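
For reference, a quick way to tell the two cases apart (this assumes a Linux
box, and uses the port 65001 from your log):

netstat -an | grep 65001

If the output shows LISTEN, some process really does own the port, and

sudo lsof -i :65001

will give you its PID. If it shows TIME_WAIT instead, there is no process to
kill; the kernel releases the socket on its own once the TIME_WAIT timeout
expires (roughly 60 seconds on Linux), so retrying after a minute or two
should work.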


Thanks,
Hari

On Mon, Nov 10, 2014 at 9:30 PM, Jeniba Johnson
<jeniba.john...@lntinfotech.com> wrote:

> Hi Hari,
> Thanks for your kind reply.
> Even after killing the process id of the specific port, I am still facing
> the same error.
> The commands I use are
> sudo lsof -i -P | grep -i "listen"
> kill -9 PID
> However, even if I try to use a port which is available, the error remains
> the same.
> Regards,
> Jeniba Johnson
> From: Hari Shreedharan [mailto:hshreedha...@cloudera.com]
> Sent: Tuesday, November 11, 2014 4:41 AM
> To: Jeniba Johnson
> Cc: dev@spark.apache.org
> Subject: Re: Bind exception while running FlumeEventCount
> Looks like that port is not available because another app is already using
> it. Can you take a look at netstat -a and use a port that is free?
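> For example, something like
> netstat -an | grep LISTEN
> will list every port that already has a listener, so you can pick one that
> does not appear in the output.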
> Thanks,
> Hari
> On Fri, Nov 7, 2014 at 2:05 PM, Jeniba Johnson
> <jeniba.john...@lntinfotech.com> wrote:
> Hi,
> I have installed Spark 1.1.0 and Apache Flume 1.4 for running the streaming
> example FlumeEventCount. Previously the code was working fine; now I am
> facing the issues mentioned below. My Flume agent is running properly and
> is able to write the file.
> The command I use is
> bin/run-example org.apache.spark.examples.streaming.FlumeEventCount 172.29.17.178 65001
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Stopping receiver with message: Error starting receiver 0: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> 14/11/07 23:19:23 INFO flume.FlumeReceiver: Flume receiver stopped
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Called receiver onStop
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Deregistering receiver 0
> 14/11/07 23:19:23 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
> at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
> at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
> at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
> at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
> at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
> at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
> at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> at org.apache.spark.scheduler.Task.run(Task.scala:54)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:344)
> at sun.nio.ch.Net.bind(Net.java:336)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
> at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
> at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
> at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
> ... 3 more
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Stopped receiver 0
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopping BlockGenerator
> 14/11/07 23:19:23 INFO util.RecurringTimer: Stopped timer for BlockGenerator after time 1415382563200
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Waiting for block pushing thread
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Pushing out the last 0 blocks
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopped block pushing thread
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopped BlockGenerator
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Waiting for executor stop is over
> 14/11/07 23:19:23 ERROR receiver.ReceiverSupervisorImpl: Stopped executor with error: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> 14/11/07 23:19:23 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
> org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
> at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
> at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
> at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
> at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
> at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
> at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
> at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
> at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> at org.apache.spark.scheduler.Task.run(Task.scala:54)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:344)
> at sun.nio.ch.Net.bind(Net.java:336)
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
> at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
> at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
> at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
> ... 3 more
> 14/11/07 23:19:23 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
> org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
> org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
> org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
> org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
> org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
> org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> org.apache.spark.scheduler.Task.run(Task.scala:54)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:722)
> 14/11/07 23:19:23 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
> 14/11/07 23:19:23 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
> 14/11/07 23:19:23 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
> 14/11/07 23:19:23 INFO scheduler.DAGScheduler: Failed to run runJob at ReceiverTracker.scala:275
> Exception in thread "Thread-28" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
> org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
> org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
> org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
> org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
> org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
> org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
> org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> org.apache.spark.scheduler.Task.run(Task.scala:54)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> java.lang.Thread.run(Thread.java:722)
> Driver stacktrace:
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
> at scala.Option.foreach(Option.scala:236)
> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Regards,
> Jeniba Johnson
