[ https://issues.apache.org/jira/browse/HADOOP-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Johan Oskarsson updated HADOOP-3232:
------------------------------------

    Attachment: du-nonblocking-v5-trunk.patch

1. Changed
2. Removed the reference to DU; DU.this.run() is called instead.

Scheduling it with an Executor would still leave a thread running to keep 
track of the schedule, no? A ScheduledThreadPoolExecutor, for example, keeps 
its worker thread alive between runs (see the sketch below).
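Roughly what I mean, as a sketch only (ScheduledDuSketch, runDu() and the 
interval handling are made up for illustration, not the patch's code):

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDuSketch {
  private volatile long used;

  public ScheduledDuSketch(long intervalMs) {
    ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    // Even here the executor keeps a worker thread alive between
    // runs to keep track of the schedule.
    scheduler.scheduleWithFixedDelay(new Runnable() {
      public void run() {
        used = runDu(); // shell out to "du -sk <dir>" and parse the output
      }
    }, 0, intervalMs, TimeUnit.MILLISECONDS);
  }

  long runDu() {
    return 0L; // placeholder for the actual du invocation
  }

  public long getUsed() {
    return used;
  }
}
{code}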
The other suggestion was to start a thread inside getUsed() and return the old 
value while the du runs (sketched below)? Wouldn't that cause problems when 
getUsed() is called very infrequently, since the returned value could be long 
out of date? Perhaps that is never an issue in practice?
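For reference, a sketch of how I understand that suggestion (again made-up 
names, and the AtomicBoolean guard is my assumption for avoiding duplicate 
refresh threads):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

public class LazyDuSketch {
  private volatile long used;
  private volatile long lastRefresh;
  private final long intervalMs;
  private final AtomicBoolean refreshing = new AtomicBoolean(false);

  public LazyDuSketch(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  public long getUsed() {
    // Kick off a background refresh if the value is stale and no
    // refresh is already running, then return the old value at once.
    if (System.currentTimeMillis() - lastRefresh > intervalMs
        && refreshing.compareAndSet(false, true)) {
      new Thread(new Runnable() {
        public void run() {
          try {
            used = runDu(); // shell out to du
            lastRefresh = System.currentTimeMillis();
          } finally {
            refreshing.set(false);
          }
        }
      }).start();
    }
    // If getUsed() is called rarely, this can be arbitrarily stale.
    return used;
  }

  long runDu() {
    return 0L; // placeholder for the actual du invocation
  }
}
{code}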

In any case, I added a comment about improving this with a non-permanent 
thread; in the most common case I expect that would work fine.

As mentioned, this patch will cause one new findbugs warning about starting a 
thread in the constructor (see the sketch below). That is hard to avoid 
without adding a new method to start the thread, which would break the public 
interface.
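What findbugs flags, roughly (the SC_START_IN_CTOR pattern; DuSketch is a 
made-up stand-in, not the real DU class):

{code}
public class DuSketch implements Runnable {
  public DuSketch() {
    Thread refreshThread = new Thread(this, "du refresh");
    refreshThread.setDaemon(true);
    // findbugs: the thread starts before the object is fully
    // constructed; avoiding this needs a separate start() method.
    refreshThread.start();
  }

  public void run() {
    // periodic du refresh loop would go here
  }
}
{code}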

> Datanodes time out
> ------------------
>
>                 Key: HADOOP-3232
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3232
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.2, 0.16.3, 0.16.4
>         Environment: 10 node cluster + 1 namenode
>            Reporter: Johan Oskarsson
>            Priority: Critical
>             Fix For: 0.18.0
>
>         Attachments: du-nonblocking-v1.patch, du-nonblocking-v2-trunk.patch, 
> du-nonblocking-v4-trunk.patch, du-nonblocking-v5-trunk.patch, 
> hadoop-hadoop-datanode-new.log, hadoop-hadoop-datanode-new.out, 
> hadoop-hadoop-datanode.out, hadoop-hadoop-namenode-master2.out
>
>
> I recently upgraded to 0.16.2 from 0.15.2 on our 10 node cluster.
> Unfortunately we're seeing datanode timeout issues. In previous versions 
> we've often seen in the namenode web UI that one or two datanodes' "last 
> contact" goes from the usual 0-3 seconds to ~200-300 seconds before it drops 
> back down to 0.
> This causes mild discomfort but the big problems appear when all nodes do 
> this at once, as happened a few times after the upgrade.
> It was suggested that this could be due to namenode garbage collection, but 
> looking at the gc log output it doesn't seem to be the case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
