simonbence commented on a change in pull request #5437: URL: https://github.com/apache/nifi/pull/5437#discussion_r732935309
########## File path: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/test/java/org/apache/nifi/processors/hadoop/TestFetchHDFS.java ##########

```diff
@@ -59,7 +59,25 @@ public void setup() {
     @Test
     public void testFetchStaticFileThatExists() throws IOException {
         final String file = "src/test/resources/testdata/randombytes-1";
-        runner.setProperty(FetchHDFS.FILENAME, file);
+        final String fileWithMultipliedSeparators = "src/test////resources//testdata/randombytes-1";
+        runner.setProperty(FetchHDFS.FILENAME, fileWithMultipliedSeparators);
+        runner.enqueue(new String("trigger flow file"));
+        runner.run();
+        runner.assertAllFlowFilesTransferred(FetchHDFS.REL_SUCCESS, 1);
+        final List<ProvenanceEventRecord> provenanceEvents = runner.getProvenanceEvents();
+        assertEquals(1, provenanceEvents.size());
+        final ProvenanceEventRecord fetchEvent = provenanceEvents.get(0);
+        assertEquals(ProvenanceEventType.FETCH, fetchEvent.getEventType());
+        // If it runs with a real HDFS, the protocol will be "hdfs://", but with a local filesystem, just assert the filename.
+        assertTrue(fetchEvent.getTransitUri().endsWith(file));
+    }
+
+    @Test
+    public void testFetchStaticFileThatExistsWithAbsolutePath() throws IOException {
```

Review comment:

   The processor is generally intended to work with HDFS (and services such as S3 that may sit behind the HDFS API), and these follow the Unix path format. Because of this, the other tests (and the production code) are prepared to work with "/". Unfortunately, for testing purposes we need to work with the local file system, which, when NiFi runs in a Windows environment (the cause of the given check's failure), uses "\" as the separator.
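One common way around the Windows/Unix separator mismatch described above is to normalize the locally built path to "/" before asserting. The sketch below is a hypothetical illustration of that idea (the helper name and strings are mine, not from the PR), not the fix the PR actually applies:

```java
public class PathNormalizationSketch {

    // Hypothetical helper: convert platform-specific "\" separators to "/",
    // so a transit URI derived from a local path compares equal on both
    // Windows and Unix test environments.
    static String toUnixSeparators(final String path) {
        return path.replace('\\', '/');
    }

    public static void main(String[] args) {
        // On Windows, a java.io.File round-trip renders paths with backslashes.
        final String windowsStyle = "src\\test\\resources\\testdata\\randombytes-1";
        final String expected = "src/test/resources/testdata/randombytes-1";

        if (!toUnixSeparators(windowsStyle).equals(expected)) {
            throw new AssertionError("separator normalization failed");
        }
        System.out.println("ok");
    }
}
```

In a test, the normalization would be applied to both sides of the `endsWith` check, keeping the production code (which targets HDFS-style "/" paths) untouched.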