Where do I do that?
Thanks
Sent from my iPhone
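[Editor's note: the "yarn executor memory overhead" suggested below is the `spark.yarn.executor.memoryOverhead` property in Spark 1.4, given in megabytes. A minimal sketch of setting it at submit time; the application script name is a placeholder:]

```shell
# Raise the per-executor off-heap overhead YARN allows (value in MB).
# "your_als_app.py" is a placeholder for the actual application script.
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.memoryOverhead=1536 \
  your_als_app.py
```

[It can also be set once for all jobs by adding a `spark.yarn.executor.memoryOverhead 1536` line to `conf/spark-defaults.conf`.]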
> On Jun 27, 2015, at 8:59 PM, Sabarish Sasidharan wrote:
>
> Try setting the yarn executor memory overhead to a higher value like 1g or
> 1.5g or more.
>
> Regards
> Sab
On 28-Jun-2015 9:22 am, "Ayman Farahat" wrote:
> That's correct, this is YARN and Spark 1.4.
> Also using the Anaconda tar for NumPy and other libs.
>
> Sent from my iPhone
> On Jun 27, 2015, at 8:50 PM, Sabarish Sasidharan
> <sabarish.sasidha...@manthan.com> wrote:
>
> Are you running on top of YARN? Plus pls provide your infrastructure
> details.
>
> Regards
> Sab
On 28-Jun-2015 8:47 am, "Ayman Farahat"
wrote:
> Hello;
> I tried to adjust the number of blocks by repartitioning the input.
> Here is how I do it (I am partitioning by users):
>
> tot = newrdd.map(lambda l:
>     (l[1], Rating(int(l[1]), int(l[2]), l[4]))).partitionBy(50).cache()
> ratings = tot.values()
> numIterations = 8
> rank = 80
> model = ALS.trainImplicit(
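[Editor's note: `partitionBy(50)` hash-partitions the pair RDD by key, so all ratings for a given user land in the same partition. A plain-Python sketch of that idea, not Spark code; `assign_partition` is a hypothetical stand-in for Spark's default hash partitioner:]

```python
# Conceptual sketch of hash partitioning as performed by RDD.partitionBy(n):
# each key is mapped to a partition index by hashing, so records that share
# a key always end up in the same partition.

def assign_partition(key, num_partitions=50):
    """Hypothetical stand-in for Spark's default hash partitioner."""
    return hash(key) % num_partitions

# Toy (user_id, rating) pairs keyed by user id.
records = [(1, 4.0), (2, 3.5), (51, 5.0), (1, 2.0)]

partitions = {}
for user, rating in records:
    partitions.setdefault(assign_partition(user), []).append((user, rating))

# Both ratings for user 1 now sit in the same partition bucket.
```

[Co-locating each user's ratings this way is what makes the per-user grouping cheap; the number of partitions trades off parallelism against per-task overhead.]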