On Tue, Jul 26, 2016 at 2:18 AM, Mail.com <pradeep.mi...@mail.com> wrote:
Hi All,

I have a directory which has 12 files. I want to read each entire file, so I
am reading it as wholeTextFiles(dirpath, numPartitions).

I run spark-submit as --num-executors 12 --executor-cores 1
and numPartitions 12.

However, when I run the job I see that the stage which reads
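For reference, a minimal sketch of the submit command matching the flags
described above (the class name, jar, and directory path are placeholders,
not from the original message):

```shell
# One core per executor, twelve executors -- one per input file.
# com.example.WholeFilesJob and myjob.jar are hypothetical names.
spark-submit \
  --num-executors 12 \
  --executor-cores 1 \
  --class com.example.WholeFilesJob \
  myjob.jar /path/to/dir 12
```

Worth noting: wholeTextFiles produces one (filename, content) record per
file, and its numPartitions argument is only a suggested minimum, so Spark
may still place several files into the same partition; that would explain a
read stage with fewer tasks than expected.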
http://talebzadehmich.wordpress.com
On 15 May 2016 at 13:19, Mail.com <pradeep.mi...@mail.com> wrote:
> Hi,
>
> I have seen multiple videos on spark tuning which show how to determine #
> cores, #executors and memory size of the job.
>
> In all that I have seen, it seems each job has to be given the max resources
> allowed in the cluster.
On Sun, May 15, 2016 at 8:19 AM, Mail.com wrote:
> In all that I have seen, it seems each job has to be given the max resources
> allowed in the cluster.
Hi,
I'm fairly sure it was because FIFO scheduling mode was used. You
could change it to FAIR and make some
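Switching off the default FIFO mode is a small configuration change along
these lines (the pool name below is a placeholder for illustration):

```
# spark-defaults.conf
spark.scheduler.mode   FAIR

# Optionally point at a pool definition file:
# spark.scheduler.allocation.file   /path/to/fairscheduler.xml
```

Within the application, jobs can then be assigned to a pool with
sc.setLocalProperty("spark.scheduler.pool", "myPool") before triggering an
action, so concurrent jobs share executors instead of queuing.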
For the last question, have you looked at:
https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation
FYI
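For anyone finding this thread later: the page linked above covers properties
along these lines, which let Spark scale executors to the workload instead of
a fixed --num-executors (the values here are illustrative, not
recommendations):

```
# spark-defaults.conf (illustrative values)
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   12
```

The external shuffle service is required so executors can be removed without
losing shuffle output.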
On Sun, May 15, 2016 at 5:19 AM, Mail.com <pradeep.mi...@mail.com> wrote:
> Hi,
>
> I have seen multiple videos on spark tuning which show how to determine #
> cores, #executors and memory size of the job.
Hi,

I have seen multiple videos on spark tuning which show how to determine #
cores, #executors and memory size of the job.

In all that I have seen, it seems each job has to be given the max resources
allowed in the cluster.

How do we factor in input size as well? I am processing a 1gb
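One way input size commonly enters the sizing, sketched as a back-of-envelope
calculation (the 128 MB block size is an assumption -- the usual HDFS
default -- not something stated in the thread):

```shell
# Rough partition count for a 1 GB text input split into 128 MB blocks.
input_mb=1024
block_mb=128
# Ceiling division: partitions needed to cover the whole input.
partitions=$(( (input_mb + block_mb - 1) / block_mb ))
echo "$partitions"   # 8 read partitions for a 1 GB input
```

With roughly 8 input partitions, requesting the cluster maximum of executors
leaves most of them idle during the read stage, which is why input size (not
just cluster capacity) should drive the executor count.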