Github user YolandaMDavis commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/843#discussion_r74612515
  
    --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/AbstractHadoopProcessor.java ---
    @@ -286,8 +310,10 @@ HdfsResources resetHDFSResources(String configResources, String dir, ProcessCont
                     }
                 }
     
    +            final Path workingDir = fs.getWorkingDirectory();
                 getLogger().info("Initialized a new HDFS File System with working dir: {} default block size: {} default replication: {} config: {}",
    -                    new Object[] { fs.getWorkingDirectory(), fs.getDefaultBlockSize(new Path(dir)), fs.getDefaultReplication(new Path(dir)), config.toString() });
    +                    new Object[]{workingDir, fs.getDefaultBlockSize(workingDir), fs.getDefaultReplication(workingDir), config.toString()});
    --- End diff ---
    
    Noted this on the Jira ([NIFI-2553](https://issues.apache.org/jira/browse/NIFI-2553)) but wanted to mention it here as well. Understood on using the working directory for the block size; I am curious why a path is required at all, given getDefaultBlockSize's implementation (noted in the Jira comments). There is a small risk that a future change on the Hadoop side makes the path genuinely relevant (e.g. block size settings at the directory level). So something to keep in the back of our minds: perhaps we should check whether dir exists first, and if it does not, fall back to the default? A rough sketch of what I mean is below.
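
    For illustration only, a minimal sketch of that guard, reusing the `fs` and `dir` variables already in scope in resetHDFSResources (this is an assumption about shape, not a proposed final implementation; note that `fs.exists` costs an extra NameNode round trip, and the no-arg `getDefaultBlockSize()` is deprecated in newer Hadoop releases):

        // Hypothetical guard (not in this PR): use the per-path block size
        // only when the target directory already exists, otherwise fall back
        // to the filesystem-wide default.
        final Path dirPath = new Path(dir);
        final long blockSize = fs.exists(dirPath)
                ? fs.getDefaultBlockSize(dirPath)  // could honor per-directory settings someday
                : fs.getDefaultBlockSize();        // filesystem-wide default (no-arg variant)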

