If you really want to learn, I recommend starting with a Flink project 
that contains unit tests and integration tests (perhaps augmented with 
https://wiki.apache.org/hadoop/HowToDevelopUnitTests to simulate an HDFS cluster 
during unit tests). It should also include coverage reporting. These aspects 
are equally crucial for developers who want to build high-quality big data 
applications, and virtually all companies will expect you to know these 
things. 

I am not sure whether a hello-world project in Flink exists that contains all of 
these, but it would be a good learning task to create one.
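One common way to make such a project testable, sketched below with hypothetical names (`WordCountLogic`, `tokenize` are illustrative, not from any existing project), is to keep the transformation logic in a plain function. The same function can be handed to a Flink `flatMap` in the real job, but the unit test exercises it without starting a cluster:

```scala
// Hypothetical sketch: transformation logic factored out of the Flink job
// so it can be unit-tested without a running cluster.
object WordCountLogic {
  // Splits a line into (word, 1) pairs; the same function could be
  // passed to a Flink flatMap in the actual streaming job.
  def tokenize(line: String): Seq[(String, Int)] =
    line.toLowerCase.split("\\W+").filter(_.nonEmpty).map((_, 1)).toSeq
}

object WordCountLogicTest extends App {
  // Plain assertions stand in for a ScalaTest/JUnit suite here.
  assert(WordCountLogic.tokenize("Hello, Flink!") == Seq(("hello", 1), ("flink", 1)))
  assert(WordCountLogic.tokenize("") == Seq.empty)
  println("all tests passed")
}
```

For end-to-end integration tests against a local Flink runtime, the `flink-test-utils` module would be the place to look, but the pure-function split above already covers most of the logic with fast, dependency-free tests.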

> On 29. Nov 2017, at 22:03, Georg Heiler <georg.kf.hei...@gmail.com> wrote:
> 
> Getting started with Flink / Scala, I wonder whether the Scala base library 
> should be excluded as a best practice:
> https://github.com/tillrohrmann/flink-project/blob/master/build.sbt#L32 
> // exclude Scala library from assembly
> assemblyOption in assembly := (assemblyOption in 
> assembly).value.copy(includeScala = false)
> 
> Also, I would like to know whether https://github.com/tillrohrmann/flink-project is 
> the most up-to-date getting-started sample project for flink-scala that you would 
> recommend.
> 
> Best,
> Georg
