Have you seen these threads?

http://search-hadoop.com/m/JW1q5tMFlb
http://search-hadoop.com/m/JW1q5dabji1
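
In case it helps while you look through those: "GC overhead limit exceeded" on
a single-node setup usually means the driver/executor heap is too small for the
data, so the first things to try are giving the JVMs more memory and avoiding
collect() on large RDDs. Below is a minimal sketch, assuming a Spark 1.x
standalone job; the app name, jar, and memory sizes are placeholders to adjust
for your machine:

    import org.apache.spark.{SparkConf, SparkContext}

    // Driver heap must be set before the JVM starts, so pass it at submit time,
    // e.g.:  spark-submit --driver-memory 4g --class MyJob my-job.jar
    val conf = new SparkConf()
      .setAppName("MyJob")                         // placeholder app name
      .set("spark.executor.memory", "4g")          // more heap per executor
      .set("spark.storage.memoryFraction", "0.4")  // Spark 1.x knob: leave more heap for computation
    val sc = new SparkContext(conf)

    // Write large results out instead of pulling them onto the driver:
    //   rdd.saveAsTextFile("hdfs:///some/output/path")   rather than   rdd.collect()

(A short note on the BindException from earlier in the thread is at the bottom
of this mail.)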

Cheers

On Mon, Jan 19, 2015 at 8:33 PM, Deep Pradhan <pradhandeep1...@gmail.com>
wrote:

> Hi Ted,
> When I run the same job with a small data set it completes fine, but with a
> relatively bigger data set it gives me
> OutOfMemoryError: GC overhead limit exceeded.
> The first time I run the job there is no output; when I run it a second time
> I get this error. I understand that the memory is filling up, but is there
> any way to avoid this?
> I have a single-node Spark cluster.
>
> Thank You
>
> On Tue, Jan 20, 2015 at 9:52 AM, Deep Pradhan <pradhandeep1...@gmail.com>
> wrote:
>
>> I had the Spark Shell running throughout. Is it because of that?
>>
>> On Tue, Jan 20, 2015 at 9:47 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>> Was there another instance of Spark running on the same machine?
>>>
>>> Can you pastebin the full stack trace?
>>>
>>> Cheers
>>>
>>> On Mon, Jan 19, 2015 at 8:11 PM, Deep Pradhan <pradhandeep1...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>> I am running a Spark job. I get the output correctly, but in the log file
>>>> I see the following:
>>>> AbstractLifeCycle: FAILED.....: java.net.BindException: Address already
>>>> in use...
>>>>
>>>> What could be the reason for this?
>>>>
>>>> Thank You
>>>>
>>>
>>>
>>
>
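
P.S. On the BindException in the logs: that warning usually just means the
Spark web UI could not bind to its default port (4040) because another driver,
e.g. the spark-shell you had open, was already using it; Spark normally retries
the next ports, which is why the job still produced output. If you want to
avoid the warning, a minimal sketch, assuming you pick a free port yourself
(the port numbers below are just examples):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MyJob")                   // placeholder app name
      .set("spark.ui.port", "4050")          // any free port; 4040 is held by the running spark-shell
      .set("spark.port.maxRetries", "32")    // let Spark probe more ports before giving up
    val sc = new SparkContext(conf)

Or simply stop the spark-shell before submitting the job.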
