>>>> Try running it in yarn-cluster mode.
>>>>
>>>> spark-submit --master yarn-cluster ...
>>>>
>>>> On Thu, Sep 10, 2015 at 7:50 PM, Raghavendra Pandey <
>>>> raghavendra.pan...@gmail.com> wrote:
>>>>
>>>>> What is the value of your spark master conf? By default it is local,
>>>>> which means only one thread can run, and that is why your job is stuck.
>>>>> Specify local[*] to make the thread pool equal to the number of cores...
>>>>>
>>>>> Raghav
>>>>> On Sep 11, 2015 6:06 AM, "Atul Kulkarni" wrote:
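
For reference, the master can be set on the command line or in the driver
code; a minimal sketch, assuming the Spark 1.x-era API used in this thread
(the class, jar, and application names are illustrative):

  spark-submit --master "local[*]" --class com.example.Main my-app.jar

or equivalently in code:

  val sparkConf = new SparkConf()
    .setAppName("my-app")     // illustrative name
    .setMaster("local[*]")    // size the local thread pool to all cores

Note that a master set on the SparkConf takes precedence over --master on
spark-submit, so leave setMaster out of the code when you intend to submit
with yarn-cluster.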
Hi Folks,

Below is the code I have for a Spark-based Kafka producer that takes
advantage of multiple executors reading files in parallel on my cluster,
but I am stuck: the program is not making any progress.

Below is my scrubbed code:

val sparkConf = new SparkConf().setAppName(applicationName)
val ssc ...
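
The rest of the code was scrubbed out of the message, but for context, a
common shape for this kind of job is sketched below. Everything past the
ssc line is an assumption, not the poster's actual code: the batch
interval, input path, broker, and topic are all illustrative.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val sparkConf = new SparkConf().setAppName(applicationName)
val ssc = new StreamingContext(sparkConf, Seconds(10)) // assumed batch interval

// Watch a directory for new files; partitions are read by the executors.
val lines = ssc.textFileStream("hdfs:///path/to/input") // illustrative path

lines.foreachRDD { rdd =>
  rdd.foreachPartition { records =>
    // Build the producer on the executor: KafkaProducer is not
    // serializable, so it cannot be created on the driver and shipped out.
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092") // illustrative broker
    props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)
    records.foreach { line =>
      producer.send(new ProducerRecord[String, String]("my-topic", line))
    }
    producer.close()
  }
}

ssc.start()
ssc.awaitTermination()

Using foreachPartition keeps producer setup to once per partition rather
than once per record, which is the usual pattern for writing to Kafka from
Spark executors.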