That helped, thanks TD! :D

From: Tathagata Das <tathagata.das1...@gmail.com>
Date: Tuesday, June 6, 2017 at 3:26 AM
To: "Jain, Nishit" <nja...@underarmour.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: Re: Spark Streaming Job Stuck
http://spark.apache.org/docs/latest/streaming-programming-guide.html#points-to-remember-1

Hope this helps.

On Mon, Jun 5, 2017 at 2:51 PM, Jain, Nishit <nja...@underarmour.com> wrote:

I have a very simple Spark Streaming job running locally in standalone mode. There is a custom receiver which reads from a database and passes the data to the main job, which prints the total. This is not an actual use case; I am just playing around to learn. The problem is that the job gets stuck forever. The logic is very simple, so I think it is neither doing any processing nor hitting a memory issue. What is strange is that if I STOP the job, I suddenly see the output of the job execution in the logs, and the other backed-up jobs follow! Can someone help me understand what is going on here?

    val spark = SparkSession
      .builder()
      .master("local[1]")
      .appName("SocketStream")
      .getOrCreate()

    val ssc = new StreamingContext(spark.sparkContext, Seconds(5))
    val lines = ssc.receiverStream(new HanaCustomReceiver())

    lines.foreachRDD { x => println("==============" + x.count()) }

    ssc.start()
    ssc.awaitTermination()

[enter image description here]<https://i.stack.imgur.com/y1GGr.png>

After terminating the program, the following logs roll, which show execution of the batch:

    17/06/05 15:56:16 INFO JobGenerator: Stopping JobGenerator immediately
    17/06/05 15:56:16 INFO RecurringTimer: Stopped timer for JobGenerator after time 1496696175000
    17/06/05 15:56:16 INFO JobGenerator: Stopped JobGenerator
    ==============100

Thanks!
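For anyone finding this thread later: the linked "Points to remember" section describes exactly this symptom. A receiver-based DStream permanently occupies one of the executor threads, so with master("local[1]") there is no thread left to process the batches; they queue up until the receiver stops, which is why the output only appears after termination. A minimal sketch of the fix (same code as above, only the master string changed; HanaCustomReceiver is the original poster's class):

    // Give the local master at least one more thread than the number of
    // receivers, so batch processing is not starved by the receiver.
    // Assumes the poster's HanaCustomReceiver and Spark 2.x APIs.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val spark = SparkSession
      .builder()
      .master("local[2]")   // 1 thread for the receiver + 1 for processing
      .appName("SocketStream")
      .getOrCreate()

    val ssc = new StreamingContext(spark.sparkContext, Seconds(5))
    val lines = ssc.receiverStream(new HanaCustomReceiver())

    lines.foreachRDD { rdd => println("==============" + rdd.count()) }

    ssc.start()
    ssc.awaitTermination()

Using "local[*]" (one thread per available core) also works, as long as the machine has more than one core.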