Github user jtstorck commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/2971#discussion_r216031690
  
    --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java ---
    @@ -269,13 +272,15 @@ public Object run() {
                             }
                             changeOwner(context, hdfs, configuredRootDirPath, flowFile);
                         } catch (IOException e) {
    -                        if (!Strings.isNullOrEmpty(e.getMessage()) && e.getMessage().contains(String.format("Couldn't setup connection for %s", ugi.getUserName()))) {
    -                          getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flowfile %s",
    -                              flowFile.getAttribute(CoreAttributes.UUID.key())));
    -                          session.rollback(true);
    -                        } else {
    -                          throw e;
    -                        }
    +                      boolean tgtExpired = hasCause(e, GSSException.class, gsse -> "Failed to find any Kerberos tgt".equals(gsse.getMinorString()));
    +                      if (tgtExpired) {
    +                        getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flow file %s",
    +                            putFlowFile.getAttribute(CoreAttributes.UUID.key())));
    +                        session.rollback(true);
    +                      } else {
    +                        getLogger().error("Failed to access HDFS due to {}", new Object[]{e});
    +                        session.transfer(session.penalize(putFlowFile), REL_FAILURE);
    --- End diff --
    
    @ekovacs I don't think we need to penalize on the transfer to failure here.
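    For context, the diff above depends on a `hasCause(e, GSSException.class, gsse -> ...)` helper that matches a typed cause in the exception chain against a predicate. The PR's actual helper is not shown here; the following is a minimal, self-contained sketch of what such a helper plausibly looks like (class and variable names are illustrative, not NiFi's):

    ```java
    import java.util.function.Predicate;

    public class CauseWalker {
        /**
         * Walks the cause chain of a Throwable, returning true if any cause is an
         * instance of the given type AND satisfies the predicate. Sketch only;
         * a production version should also guard against cyclic cause chains.
         */
        public static <T extends Throwable> boolean hasCause(Throwable t, Class<T> type, Predicate<T> test) {
            while (t != null) {
                if (type.isInstance(t) && test.test(type.cast(t))) {
                    return true;
                }
                t = t.getCause();
            }
            return false;
        }

        public static void main(String[] args) {
            // Simulate an IOException-style wrapper around a Kerberos-related cause.
            Exception root = new IllegalStateException("Failed to find any Kerberos tgt");
            Exception wrapped = new RuntimeException("io failure", root);

            System.out.println(CauseWalker.hasCause(wrapped, IllegalStateException.class,
                    c -> c.getMessage().contains("Kerberos tgt"))); // true
            System.out.println(CauseWalker.hasCause(wrapped, NullPointerException.class,
                    c -> true)); // false
        }
    }
    ```

    In the diff itself the predicate inspects `GSSException.getMinorString()` rather than the message, which avoids the brittle message-string matching that the removed code used.
    
    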

