Hi.

Thanks for the advice, just to clarify:
The upgrade you speak of, the one that would clean up the pipes/epolls more
often - is it about the issue we discussed (HADOOP-4346, already fixed in
my distribution), or is it some other issue?

If it's another issue, does it have a ticket I can look at, or should one
be filed in Jira?
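
For reference, here is roughly the periodic GC call I was asking about
earlier. This is only a sketch: the class name and the 30-second interval
are arbitrary choices of mine, and System.gc() is only a hint to the JVM.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PeriodicGc {
        public static void start() {
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // Suggest a full collection, then run pending
                    // finalizers, so unreachable streams release their
                    // pipe/epoll file descriptors.
                    System.gc();
                    System.runFinalization();
                }
            }, 30, 30, TimeUnit.SECONDS);
        }
    }

(As Brian notes below, raising the node's open-file limit is probably the
saner interim fix than forcing GC.)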

Thanks!

2009/6/23 Brian Bockelman <bbock...@cse.unl.edu>

> Hey Stas,
>
> It sounds like it's technically possible, but it also sounds like a
> horrible hack: I'd avoid this at all costs.  This is how cruft is born.
>
> The pipes/epolls are something that eventually gets cleaned up - but they
> don't get cleaned up often enough for your cluster.  I would recommend
> just increasing the limit on the node itself and then waiting for an
> upgrade to "solve" this.
>
> Brian
>
>
> On Jun 23, 2009, at 3:31 AM, Stas Oskin wrote:
>
>> Hi.
>>
>> Any idea if calling System.gc() periodically will help reduce the number
>> of pipes/epolls?
>>
>> Thanks for your opinion!
>>
>> 2009/6/22 Stas Oskin <stas.os...@gmail.com>
>>
>>> Ok, it seems this issue is already patched in the Hadoop distro I'm
>>> using (Cloudera).
>>>
>>> Any idea whether I should still call GC manually/periodically to clean
>>> out all the stale pipes/epolls?
>>>
>>> 2009/6/22 Steve Loughran <ste...@apache.org>
>>>
>>>> Stas Oskin wrote:
>>>>
>>>>> Hi.
>>>>>
>>>>> So what would be the recommended approach for the pre-0.20.x series?
>>>>>
>>>>> To ensure each file is used by only one thread, and that it is then
>>>>> safe to close the handle in that thread?
>>>>>
>>>>> Regards.
>>>>>
>>>> Good question - I'm not sure. For anything you get with
>>>> FileSystem.get(), it's now dangerous to close, so try just setting the
>>>> reference to null and hoping that GC will do the finalize() when
>>>> needed.
>>>>
>>>>
>>>
>
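
P.S. For the archives, here is a minimal sketch of the "drop the reference
instead of closing" approach Steve describes above. The class name and
structure are my own; it only assumes what the thread already states, that
FileSystem.get() hands back a shared, cached instance.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReader {
        private FileSystem fs;

        public HdfsReader(Configuration conf) throws IOException {
            // Cached, shared instance - other threads may hold it too.
            fs = FileSystem.get(conf);
        }

        public void readFully(Path path) throws IOException {
            FSDataInputStream in = fs.open(path);
            try {
                // ... consume the stream ...
            } finally {
                in.close(); // closing the stream itself is still required
            }
        }

        public void release() {
            // Don't call fs.close(): the instance may be shared via the
            // FileSystem.get() cache. Drop the reference and let GC run
            // finalize() on it if/when needed.
            fs = null;
        }
    }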
