Hi guys,

I set up a pseudo-distributed Hadoop/YARN cluster on my laptop.

I wrote a simple Spark Streaming program, shown below, to receive messages
with MQTTUtils.

SparkConf conf = new SparkConf().setAppName("Monitor&Control");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
JavaReceiverInputDStream<String> inputDS =
    MQTTUtils.createStream(jssc, brokerUrl, topic);

inputDS.print();
jssc.start();
jssc.awaitTermination();


If I submit the app in local mode, it works well:

spark-submit --master local[4] --driver-memory 4g --executor-memory 2g 
--num-executors 4 target/CollAna-1.0-SNAPSHOT.jar

If I submit with "--master yarn", there is no output from "inputDS.print()":

spark-submit --master yarn --deploy-mode cluster --driver-memory 4g 
--executor-memory 2g --num-executors 4 target/CollAna-1.0-SNAPSHOT.jar

Is it possible to run a Spark application on YARN with only a single node?


Thanks for your advice.


Jared
