[ https://issues.apache.org/jira/browse/HDFS-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700251#comment-17700251 ]

ASF GitHub Bot commented on HDFS-16942:
---------------------------------------

ayushtkn commented on PR #5460:
URL: https://github.com/apache/hadoop/pull/5460#issuecomment-1468287415

   This wasn't the fix: https://github.com/apache/hadoop/pull/5460#issuecomment-1463012342; rather, it broke the javadoc build.
   Run `mvn clean site` and you will find an exception like
   ```
   [ERROR] /hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/package-info.java:22: error: package org.apache.hadoop.hdfs.server.protocol has already been annotated
   [ERROR] @InterfaceAudience.Private
   [ERROR] ^
   [ERROR] java.lang.AssertionError
   [ERROR]      at com.sun.tools.javac.util.Assert.error(Assert.java:126)
   [ERROR]      at com.sun.tools.javac.util.Assert.check(Assert.java:45)
   [ERROR]      at com.sun.tools.javac.code.SymbolMetadata.setDeclarationAttributesWithCompletion(SymbolMetadata.java:177)
   [ERROR]      at com.sun.tools.javac.code.Symbol.setDeclarationAttributesWithCompletion(Symbol.java:215)
   [ERROR]      at com.sun.tools.javac.comp.MemberEnter.actualEnterAnnotations(MemberEnter.java:952)
   [ERROR]      at com.sun.tools.javac.comp.MemberEnter.access$600(MemberEnter.java:64)
   [ERROR]      at com.sun.tools.javac.comp.MemberEnter$5.run(MemberEnter.java:876)
   [ERROR]      at com.sun.tools.javac.comp.Annotate.flush(Annotate.java:143)
   [ERROR]      at com.sun.tools.javac.comp.Annotate.enterDone(Annotate.java:129)
   [ERROR]      at com.sun.tools.javac.comp.Enter.complete(Enter.java:512)
   [ERROR]      at com.sun.tools.javac.comp.Enter.main(Enter.java:471)
   [ERROR]      at com.sun.tools.javadoc.JavadocEnter.main(JavadocEnter.java:78)
   [ERROR]      at com.sun.tools.javadoc.JavadocTool.getRootDocImpl(JavadocTool.java:186)
   ```
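   For context, a package-info.java exists precisely to carry package-level annotations, and javadoc allows the package symbol to be annotated only once per run. A minimal sketch of the shape of the clashing file (the @InterfaceAudience.Private annotation is taken from the error output above; the exact contents of the real files are an assumption):
   
   ```java
   // A package-info.java attaches annotations to the whole package, e.g.:
   @InterfaceAudience.Private
   package org.apache.hadoop.hdfs.server.protocol;
   
   import org.apache.hadoop.classification.InterfaceAudience;
   ```
   
   With a copy of such a file in both hadoop-hdfs-client and hadoop-hdfs, `mvn site` feeds both source trees into a single javadoc invocation, the package symbol gets annotated twice, and javac's annotation completion trips the AssertionError shown above.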
   
   Now for the original problem that @sodonnel mentioned here: https://github.com/apache/hadoop/pull/5460#issuecomment-1458896438
   The enforcer throws an exception once he adds package-info.java:
   
   ```
     Duplicate classes:
       org/apache/hadoop/hdfs/server/protocol/package-info.class
   ```
   And the fix went in the direction of "this enforcer has gone crazy, let's filter this file out", but that poor fellow wasn't doing anything wrong :) 
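   For reference, that "Duplicate classes" failure comes from the banDuplicateClasses rule of the Maven extra-enforcer-rules extension; package-info.java compiles to a package-info.class in each jar, so two copies of the same package's file produce a duplicate class. A sketch of the kind of filter the earlier fix amounts to, which only hides the symptom (the exact rule configuration in Hadoop's pom is an assumption here):
   
   ```xml
   <banDuplicateClasses>
     <ignoreClasses>
       <!-- band-aid: suppresses the duplicate-class check for this file
            instead of removing the duplicate source file -->
       <ignoreClass>org.apache.hadoop.hdfs.server.protocol.package-info</ignoreClass>
     </ignoreClasses>
   </banDuplicateClasses>
   ```
   
   Deleting the redundant package-info.java makes any such filter unnecessary.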
   
   Check file 1:
   
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/package-info.java
   
   Now the file you added, file 2:
   
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/package-info.java
   
   One was already there in hdfs-client, and now this one got added for the same package in hdfs as well. As for why the same package exists in both the client and hdfs jars, I think all of us here know the reasons, so I won't get into that...
   
   @sodonnel, can you delete this new package-info.java file? We can then fix the build, noting that the Checkstyle warning is irrelevant/unavoidable.




> Send error to datanode if FBR is rejected due to bad lease
> ----------------------------------------------------------
>
>                 Key: HDFS-16942
>                 URL: https://issues.apache.org/jira/browse/HDFS-16942
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.2.5, 3.3.6
>
>
> When a datanode sends a FBR to the namenode, it requires a lease to send it. 
> On a couple of busy clusters, we have seen an issue where the DN is somehow 
> delayed in sending the FBR after requesting the lease. Then the NN rejects 
> the FBR and logs a message to that effect, but from the Datanodes point of 
> view, it thinks the report was successful and does not try to send another 
> report until the 6 hour default interval has passed.
> If this happens to a few DNs, there can be missing and under replicated 
> blocks, further adding to the cluster load. Even worse, I have seen DNs 
> join the cluster with zero blocks, so it is not obvious that the under replication 
> is caused by a lost FBR, as all DNs appear to be up and running.
> I believe we should propagate an error back to the DN if the FBR is rejected, 
> that way, the DN can request a new lease and try again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
