Community,

How can I increase the number of executors for my streaming job running in local mode?

We have tried spark.master = local[4], but it does not start 4 executors and 
our job keeps getting queued. Do we need to make a code change to increase 
the number of executors?

The job (a jar file) reads from a Kafka stream with 2 partitions and sends 
the data to Cassandra.

Please advise - thanks again, community.

Here is how we start the job:

nohup spark-submit --properties-file /hadoop_common/airwaveApList.properties 
--class airwaveApList /hadoop_common/airwaveApList-1.0.jar
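
For reference, we understand the master can also be passed directly to 
spark-submit (command-line flags take precedence over the properties file); 
a sketch using our paths and class name:

```shell
nohup spark-submit \
  --master local[4] \
  --properties-file /hadoop_common/airwaveApList.properties \
  --class airwaveApList \
  /hadoop_common/airwaveApList-1.0.jar &
```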

Properties file for the streaming job:

spark.cassandra.connection.host       cass_host
spark.cassandra.auth.username         cass_app
spark.cassandra.auth.password         xxxxx
spark.topic                           ap_list_spark_streaming
spark.app.name                        ap-status
spark.metadata.broker.list            server.corp.net:6667
spark.zookeeper.connect               server.net:2181
spark.group.id                        airwave_activation_status
spark.zookeeper.connection.timeout.ms 1000
spark.cassandra.sql.keyspace          enterprise
spark.master                          local[4]
spark.batch.size.seconds              120
spark.driver.memory                   12G
spark.executor.memory                 12G
spark.akka.frameSize                  512
spark.local.dir                       /prod/hadoop/spark/airwaveApList_temp
spark.history.kerberos.keytab         none
spark.history.kerberos.principal      none
spark.history.provider                org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port                 18080
spark.yarn.historyServer.address      has-dal-0001.corp.wayport.net:18080
spark.yarn.services                   org.apache.spark.deploy.yarn.history.YarnHistoryService
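
In case it helps whoever answers: our understanding is that local[N] runs 
the driver and executor in a single JVM with N worker threads, so separate 
executors are never launched in that mode. If the right fix is to move to a 
cluster manager instead, a hypothetical YARN submission might look like this 
(executor counts and memory are illustrative only, not values we have tested):

```shell
spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-memory 4G \
  --properties-file /hadoop_common/airwaveApList.properties \
  --class airwaveApList \
  /hadoop_common/airwaveApList-1.0.jar
```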
