Hi firemonk9,
Sorry, it's been too long, but I just saw this. I hope you were able to
resolve it. FWIW, we were able to solve this with the help of the Low-Level
Kafka Consumer, instead of the built-in Kafka consumer in Spark, from here:
https://github.com/dibbhatt/kafka-spark-consumer/.
Regards
I am using Spark Streaming to process data received through Kafka. The
Spark version is 1.2.0. I have written the code in Java and am compiling it
using sbt. The program runs and receives data from Kafka and processes it
as well. But it suddenly stops receiving data after some time (it has run
for
Hello,
On the master's web UI, even though two workers are shown, there is only
one executor: one for machine1 but none for machine2. Hence, if only
machine1 is added as a worker the program runs, but if only machine2 is
added, it fails with the same error 'Master r
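In case it helps anyone hitting the same symptom, the usual things to check (host names, addresses, and paths below are illustrative, not from the original thread) are that the failing worker can reach the master by the exact URL the master advertises, and that the worker advertises a resolvable address of its own in conf/spark-env.sh:

```shell
# On machine2 (the worker that registers but never gets an executor):

# 1) Verify connectivity to the master using the exact hostname shown in
#    the spark://<host>:7077 URL on the master's web UI (not an alias).
ping -c 1 master-host                      # hostname is illustrative

# 2) Pin the address this worker advertises, so the master and driver
#    can connect back to it (address is illustrative).
echo 'SPARK_LOCAL_IP=192.168.1.12' >> conf/spark-env.sh

# 3) Restart the workers so the setting takes effect.
sbin/stop-slaves.sh && sbin/start-slaves.sh
```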
Hello,
I am trying to run a job on two workers. I have a cluster of 3
computers where one is the master and the other two are workers. I am able
to successfully register the separate physical machines as workers in the
cluster. When I run a job with a single worker connected, it runs
successf
Hello,
Thanks a lot! I installed Maven 3.2.2 and the build worked with Maven.
But I also got the prebuilt version to run. So I will be using the prebuilt
version. Is there any downside to using the prebuilt version?
Also, could you tell me what I would need to do if I had to build it without
mav
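For the record, a Maven-free build is possible with the bundled sbt launcher, the same one mentioned in the build-error thread below; this is just the stock invocation, assuming nothing beyond a JDK and network access for dependency resolution:

```shell
# From the top of the unpacked Spark 1.0.x source tree:
sbt/sbt assembly   # builds the full assembly jar, equivalent to the Maven build
```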
Hello,
I am trying to build Apache Spark version 1.0.1 on Ubuntu 12.04 LTS. After
unzipping the file and running sbt/sbt assembly, I get the following error:
rasika@rasikap:~/spark-1.0.1$ sbt/sbt package
Error occurred during initialization of VM
Could not reserve enough space for object heap
Err
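For anyone else hitting "Could not reserve enough space for object heap": the JVM that the sbt script launches is asking for a larger heap than the OS can provide, which is common on 32-bit JVMs or low-memory machines. One hedged workaround, assuming a HotSpot JVM, is to cap the heap via the _JAVA_OPTIONS environment variable, which HotSpot applies to every JVM it starts (the sizes below are illustrative, not a recommendation):

```shell
# Request a smaller heap than the sbt launcher's default; HotSpot JVMs
# read _JAVA_OPTIONS and apply these flags on startup.
export _JAVA_OPTIONS="-Xms256m -Xmx1024m"
sbt/sbt assembly
```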