Thank you Priyanka,

We are opening a larger number of files in our app. Could you please let me
know how to find the current limit on the number of open files in the
cluster, so that we can update it?
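For reference, I believe the per-node limits can be checked with something
like the commands below (this is just my understanding of the standard Linux
commands; <pid> is a placeholder for a running container's process id), but
I am not sure which of these applies to the cluster as a whole:

  ulimit -Sn                  # current soft limit on open file descriptors
  ulimit -Hn                  # current hard limit on open file descriptors
  cat /proc/sys/fs/file-max   # system-wide maximum number of open files
  cat /proc/<pid>/limits      # effective limits of a running process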



On Mon, Oct 17, 2016 at 11:13 AM, Priyanka Gugale <pri...@apache.org> wrote:

> Looks like you are reaching the limit set for the maximum number of open
> file handles on your system.
> Is your application opening a lot of files at the same time? If you expect
> your application to open many files at the same time, you can increase the
> max open files limit in Hadoop by changing some settings. (Check the
> command "ulimit -n <limit>" to update your open file handle limit.)
> If your application is not supposed to open many files at the same time,
> please check why so many file handles are open at the same time.
>
> -Priyanka
>
> On Fri, Oct 14, 2016 at 6:58 PM, chiranjeevi vasupilli <
> chiru....@gmail.com> wrote:
>
>>
>> Hi Team,
>>
>> Can you please let me know under what circumstances we get this kind of
>> exception. In my application, containers are getting killed with the
>> exception below.
>>
>>
>> java.lang.RuntimeException: java.io.IOException: All datanodes
>> DatanodeInfoWithStorage[147.22.192.229:1004,DS-fa5a41a5-c953-477e-a5d9-3dc499d5baee,DISK]
>> are bad. Aborting...
>> at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.endWindow(AbstractFileOutputOperator.java:948)
>> at com.datatorrent.stram.engine.GenericNode.processEndWindow(GenericNode.java:141)
>>
>>
>> --
>> ur's
>> chiru
>>
>
>


-- 
ur's
chiru
