... canFitInMemory() returns false.

My two cents.

On Tue, Mar 9, 2010 at 11:51 PM, Christopher Douglas wrote:

> That section of code is unmodified in MR-1182. See the patches/svn log.
> -C
>
> Sent from my iPhone
>
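[For readers without the source handy, the check mentioned above decides whether a fetched map output is staged in RAM or spilled to disk. The sketch below is a hypothetical illustration of that decision, not the actual ReduceTask.java code; the 25% cap mirrors the MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION value quoted later in this thread.]

    // Hypothetical sketch of the "can this map output stay in memory?" decision.
    // Not the actual ReduceTask.java source; the 25% cap mirrors
    // MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION quoted later in this thread.
    class ShuffleMemoryCheck {
        private final long maxSingleShuffleLimit; // largest single segment kept in RAM

        ShuffleMemoryCheck(long maxSize) {
            // maxSize is the total in-memory shuffle budget for the reduce task
            this.maxSingleShuffleLimit = (long) (maxSize * 0.25f);
        }

        /** A fetched map output larger than the single-segment cap is spilled to disk. */
        boolean canFitInMemory(long requestedSize) {
            return requestedSize < maxSingleShuffleLimit;
        }
    }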
>>>>> On Mar 9, 2010, at 7:44 PM, "Ted Yu" wrote:
>>>>>
>>>>> I just downloaded the hadoop-0.20.2 tar ball from the Cloudera mirror.
>>>>> This is what I see in ReduceTask (line 999):
>>>>>
>>>>>   // Wait till the request can be fulfilled ...
>>>>>   while ((size + requestedSize) > maxSize) {
>>>>>
>>>>> I don't see the fix from MR-1182.
>>>>> That's why I suggested to Andy that he manually apply MR-1182.
>>>>>
>>>>> Cheers
>>>>>
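[The two lines quoted from ReduceTask are the entry to a blocking reservation loop: each copier thread asks the shuffle RAM manager for space and waits until outstanding reservations fit under maxSize. A stripped-down sketch of that pattern follows; it is an illustration only, neither the patched nor the unpatched Hadoop source.]

    // Minimal sketch of the blocking reservation pattern quoted above
    // ("while ((size + requestedSize) > maxSize)"). Illustrative only.
    class ShuffleRamBudget {
        private final long maxSize; // total in-memory shuffle budget
        private long size;          // bytes currently reserved by copier threads

        ShuffleRamBudget(long maxSize) {
            this.maxSize = maxSize;
        }

        /** Block the calling copier until its request fits in the budget. */
        synchronized void reserve(long requestedSize) throws InterruptedException {
            // Wait till the request can be fulfilled ...
            while (size + requestedSize > maxSize) {
                wait();
            }
            size += requestedSize;
        }

        /** Release a reservation once the segment is merged or spilled. */
        synchronized void unreserve(long requestedSize) {
            size -= requestedSize;
            notifyAll();
        }
    }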
>>>> On Tue, Mar 9, 2010 at 5:01 PM, Andy Sautins <andy.saut...@returnpath.net> wrote:
>>>>
... set to 1 and continue to have the same Java heap space error.

-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, March 09, 2010 12:56 PM
To: common-user@hadoop.apache.org
Subject: Re: Shuffle In Memory OutOfMemoryError

This issue has been resolved in
http://issues.apache.org/jira/browse/MAPREDUCE-1182

>>>>> Thanks Ted. My understanding is that MAPREDUCE-1182 is included in the
>>>>> 0.20.2 release. We upgraded our cluster to 0.20.2 this weekend and
>>>>> re-ran the same job ...
... is configured to be 640M ( mapred.child.java.opts set to -Xmx640m ).

Andy

-Original Message-
From: Christopher Douglas [mailto:chri...@yahoo-inc.com]
Sent: Tuesday, March 09, 2010 5:19 PM
To: common-user@hadoop.apache.org
Subject: Re: Shuffle In Memory OutOfMemoryError

No, MR-1182 is included in 0.20.2.

What heap size have you set for your reduce tasks? -C

Sent from my iPhone

On Mar 9, 2010, at 2:34 PM, "Ted Yu" wrote:

> Andy:
> You need to manually apply the patch.
>
> Cheers
>
> On ...
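[To put the 640M figure in context: in 0.20.x the in-memory shuffle buffer is a fraction of the reduce task heap (mapred.job.shuffle.input.buffer.percent, 0.70 by default), so a rough, assumption-laden estimate of the budget in question looks like this. The class name and numbers are illustrative, not output from an actual job.]

    // Back-of-the-envelope sizing for a 640 MB reduce heap (-Xmx640m).
    // The 0.70 fraction is the 0.20.x default for
    // mapred.job.shuffle.input.buffer.percent; treat this as an estimate only.
    public class ShuffleBufferEstimate {
        public static void main(String[] args) {
            long heapBytes = 640L << 20;                       // -Xmx640m
            double bufferPercent = 0.70;                       // shuffle input buffer fraction
            long maxSize = (long) (heapBytes * bufferPercent); // in-memory shuffle budget

            System.out.printf("heap: %d MB, shuffle buffer: ~%d MB%n",
                    heapBytes >> 20, maxSize >> 20);           // roughly 448 MB
        }
    }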
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, March 09, 2010 12:56 PM
To: common-user@hadoop.apache.org
Subject: Re: Shuffle In Memory OutOfMemoryError

This issue has been resolved in
http://issues.apache.org/jira/browse/MAPREDUCE-1182

Please apply the patch
M1182-1v20.patch <http://i...>
-Original Message-
From: Ted Yu
Date: Tue, 9 Mar 2010 14:33:28
To:
Subject: Re: Shuffle In Memory OutOfMemoryError

Andy:
You need to manually apply the patch.

... the release?

Thanks for your help.

Andy

-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, March 09, 2010 3:33 PM
To: common-user@hadoop.apache.org
Subject: Re: Shuffle In Memory OutOfMemoryError

Andy:
You need to manually apply the patch.

Cheers

On Tue ...
> ... ReduceTask.java:1195)
>
> So from that it does seem like something else might be going on, yes? I
> need to do some more research.
>
> I appreciate your insights.
>
> Andy
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Sunday, March 07, 2010 3:38 PM
> To: common-user@hadoop.apache.org
> ... problem.
>
> Thanks again for your thoughts.
>
> Andy
>
>
> -Original Message-
> From: Jacob R Rideout [mailto:apa...@jacobrideout.net]
> Sent: Sunday, March 07, 2010 1:21 PM
> To: common-user@hadoop.apache.org
> Cc: Andy Sautins; Ted Yu
> Subject: Re: Shuffle In Memory OutOfMemoryError
Ted,

Thank you. I filed MAPREDUCE-1571 to cover this issue. I might have
some time to write a patch later this week.

Jacob Rideout

On Sat, Mar 6, 2010 at 11:37 PM, Ted Yu wrote:

> I think there is a mismatch (in ReduceTask.java) between:
>
>   this.numCopiers = conf.getInt("mapred.reduce.parallel.copies", 5);
>
> and:
>
>   maxSingleShuffleLimit = (long)(maxSize * MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION);
>
> where MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION is 0.25f, because
>
>   copiers = ne...
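[A quick worked example of the mismatch Ted describes, using the defaults quoted above (5 parallel copiers, each allowed up to 25% of the buffer): in the worst case the copiers can collectively hold 125% of the in-memory budget. The figures and class name below are illustrative only, not output from ReduceTask.java.]

    // Worked example of the mismatch: 5 copiers * 25% of maxSize = 125% of maxSize.
    public class CopierBudgetMismatch {
        public static void main(String[] args) {
            long maxSize = 448L << 20;                    // e.g. ~448 MB budget from a 640 MB heap
            int numCopiers = 5;                           // mapred.reduce.parallel.copies default
            long singleLimit = (long) (maxSize * 0.25f);  // MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION

            long worstCase = numCopiers * singleLimit;    // every copier at the per-segment cap
            System.out.printf("per-segment cap: %d MB, %d copiers worst case: %d MB (budget %d MB)%n",
                    singleLimit >> 20, numCopiers, worstCase >> 20, maxSize >> 20);
            // Without careful accounting in the reservation loop, the copiers can outstrip the budget.
        }
    }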