Hello,

I have a web application that works against a Hadoop cluster in the background. The
application is based on Spring Boot and is packaged via the
spring-boot-maven-plugin. This plugin works similarly to the
maven-assembly-plugin in that it puts the dependency jars into the final output
jar. For ordinary Hadoop MapReduce jobs, I add the required Hadoop libraries as
dependencies to my application so that they are included in the final jar.
At runtime I create a new Hadoop Configuration (simply via new Configuration()),
add all the Hadoop configuration XML files for my cluster as resources to it
(conf.addResource()), and additionally set "fs.hdfs.impl" to
DistributedFileSystem.class.
With this Configuration, I can access HDFS and submit MapReduce jobs from
my web app just fine.
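
For reference, that part of my setup looks roughly like this (the XML paths are
just examples from my environment):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class HdfsAccess {
    public static void main(String[] args) throws Exception {
        // Build the configuration programmatically instead of relying on
        // HADOOP_CONF_DIR being set in the web app's environment.
        Configuration conf = new Configuration();

        // Add the cluster's configuration files as resources
        // (paths below are illustrative, not my actual ones).
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/yarn-site.xml"));

        // Pin the hdfs:// scheme to DistributedFileSystem explicitly;
        // with repackaged fat jars the automatic filesystem lookup can
        // otherwise fail.
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());

        // With this Configuration, HDFS access works as usual.
        try (FileSystem fs = FileSystem.get(conf)) {
            System.out.println(fs.exists(new Path("/tmp")));
        }
    }
}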

How do I achieve a similar behaviour with Flink?
