Thanks a lot! But in client mode, can we assign the number of workers/nodes
as a flag parameter to the spark-submit command?
And by default, how will it distribute the load across the nodes?

# Run on a Spark Standalone cluster in client deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000
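
For comparison, if I understand the docs correctly, on YARN in client mode the
number of executors can be requested explicitly. This is only a sketch; the
executor count, cores, and memory below are placeholders, not recommendations:

# Run on a YARN cluster in client deploy mode, requesting 4 executors
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-client \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 4G \
  /path/to/examples.jar \
  1000

On a standalone master there does not appear to be a per-application
executor-count flag; the closest control is capping the total cores, as with
--total-executor-cores above (or --conf spark.cores.max=100), and by default
the standalone scheduler spreads those cores across the available workers
(spark.deploy.spreadOut).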



On Sat, Jun 20, 2015 at 3:18 AM, Tathagata Das <t...@databricks.com> wrote:

> Depends on which cluster manager you are using. It's all pretty well
> documented in the online documentation.
>
> http://spark.apache.org/docs/latest/submitting-applications.html
>
> On Fri, Jun 19, 2015 at 2:29 PM, anshu shukla <anshushuk...@gmail.com>
> wrote:
>
>> Hey,
>> *[For Client Mode]*
>>
>> 1- Is there any way to assign the number of workers from a cluster that
>> should be used for a particular application?
>>
>> 2- If not, then how does the Spark scheduler decide the scheduling of
>> different applications inside one full logic?
>>
>> Say my logic has {inputStream ---->>
>> wordsplitter ----->> wordcount ---->> statistical analysis},
>>
>> then on how many workers will it be scheduled?
>>
>> --
>> Thanks & Regards,
>> Anshu Shukla
>> SERC-IISC
>>
>
>


-- 
Thanks & Regards,
Anshu Shukla
