[ https://issues.apache.org/jira/browse/HDFS-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14899931#comment-14899931 ]

Hadoop QA commented on HDFS-9046:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 37s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  2s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 30s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 186m 23s | Tests failed in hadoop-hdfs. |
| | | 228m 35s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761324/HDFS-9046_3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3a9c707 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12563/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12563/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12563/console |


This message was automatically generated.

> Any error during BPOfferService run can lead to a missing DN.
> ------------------------------------------------------------
>
>                 Key: HDFS-9046
>                 URL: https://issues.apache.org/jira/browse/HDFS-9046
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: nijel
>            Assignee: nijel
>         Attachments: HDFS-9046_1.patch, HDFS-9046_2.patch, HDFS-9046_3.patch
>
>
> The cluster is in HA mode, and each DN has only one block pool.
> The issue is that after a failover, one DN is missing from the current active NN.
> Upon analysis, I found an exception in BPOfferService.run():
> {noformat}
> 2015-08-21 09:02:11,190 | WARN  | DataNode: 
> [[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
> [DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
> 160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
> pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
> 15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
> 160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
> java.lang.OutOfMemoryError: unable to create new native thread
>                 at java.lang.Thread.start0(Native Method)
>                 at java.lang.Thread.start(Thread.java:714)
>                 at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
>                 at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
>                 at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
>                 at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After this, the BPOfferService is down for the rest of the runtime,
> and this NN will not have the details of this DN.
> Similar issues are discussed in the following JIRAs:
> https://issues.apache.org/jira/browse/HDFS-2882
> https://issues.apache.org/jira/browse/HDFS-7714
> Can we retry in this case as well, with a larger interval, instead of shutting
> down this BPOfferService?
> I think that since these exceptions can occur randomly in the DN, it is not good
> to keep the DN running while some NN does not have its info!
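The retry-with-backoff behaviour proposed in the quoted description could be sketched generically as below: instead of letting the service-actor thread exit on an unexpected Throwable, the loop body is retried with an increasing, capped sleep interval. This is a minimal, self-contained sketch under stated assumptions; the names (RetrySketch, runWithRetry, the backoff parameters) are illustrative and are not part of Hadoop's actual BPServiceActor API.

```java
// Hypothetical sketch of retrying a service loop body instead of shutting
// the service down on an unexpected error. Not Hadoop code.
public class RetrySketch {

    /** Runs task until it succeeds; returns the number of attempts made. */
    public static int runWithRetry(Runnable task,
                                   long initialBackoffMs,
                                   long maxBackoffMs) throws InterruptedException {
        long backoff = initialBackoffMs;
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                task.run();
                return attempts;            // success: the service keeps running
            } catch (Throwable t) {         // e.g. an OutOfMemoryError from a command
                // Log and back off instead of exiting the service loop,
                // doubling the wait up to a configured cap.
                Thread.sleep(backoff);
                backoff = Math.min(backoff * 2, maxBackoffMs);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] failures = {2};         // simulate two transient failures
        int attempts = runWithRetry(() -> {
            if (failures[0]-- > 0) {
                throw new RuntimeException("simulated transient error");
            }
        }, 1, 8);
        System.out.println("attempts=" + attempts);  // succeeds on the 3rd attempt
    }
}
```

One design question such a change would need to settle is which Throwables are safe to retry: an OutOfMemoryError may be transient (as in the trace above), but retrying indefinitely could also mask a permanent misconfiguration, so a capped interval plus logging keeps the failure visible.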



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
