Hi,

I saw today that all DataNodes were still alive when I lost the TaskTracker.

For example: I lost slave1 as a TaskTracker, but slave1 is still alive as a
DataNode.
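
For reference, this is roughly how I check and restart the daemon on slave1
(assuming a standard 0.20-style tarball install with HADOOP_HOME set; exact
paths may differ on your setup):

    # on slave1: list the Java daemons that are actually running
    jps

    # look at the TaskTracker log for the real reason it died
    tail -n 100 $HADOOP_HOME/logs/*-tasktracker-*.log

    # restart only the TaskTracker on this node
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker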

In addition, I tried to increase my Java heap size, because my application
keeps so many objects alive simultaneously. But that did not help either...
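
In case it matters, this is the kind of change I mean, in mapred-site.xml
(property name as used in 0.20.x; the 1024m value is only an example):

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1024m</value>
    </property>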

That is new info for me. Maybe someone has an idea?
Regards

Baran
2011/3/25 baran cakici <barancak...@gmail.com>

> I am still waiting for suggestions...
>
> thanks again...
>
> Baran
>
>   2011/3/16 baran cakici <barancak...@gmail.com>
>
>> OK... :)
>> Any other suggestions for a solution?
>>
>> 2011/3/16 Harsh J <qwertyman...@gmail.com>
>>
>>> Hello,
>>>
>>> On Thu, Mar 17, 2011 at 1:39 AM, baran cakici <barancak...@gmail.com>
>>> wrote:
>>> > @Harsh
>>> >
>>> > I start the daemons with start-dfs.sh and then start-mapred.sh. Do you
>>> > mean this exception (org.apache.hadoop.ipc.RemoteException) is normal?
>>>
>>> Yes, and it is only logged at INFO level. This isn't a problem, since the
>>> NN needs to be up before the JT can use it.
>>>
>>> --
>>> Harsh J
>>> http://harshj.com
>>>
>>
>>
>
