That is the complexity of the source code; this is easy to obtain: just fork it on 
GitHub and run it through SonarQube or Codacy cloud. I am not sure whether the 
Flink project does this already. For my open source libraries 
(hadoopcryptoledger and hadoopoffice), which also provide Flink modules, I do this.
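
For the purely statistical metrics Esa asks about below (number of source files, 
lines, classes, methods), a short script over a local checkout is often enough; 
you do not need a full analysis platform. Here is a minimal sketch in Python, 
where the checkout path and the regex-based class/method matching are my own 
assumptions (the regexes give rough estimates, not exact parser counts):

import os
import re

# Path to a local clone of Flink -- an assumption, adjust to your checkout.
FLINK_ROOT = "/path/to/flink"

# Rough regexes for Java class and method declarations; these are
# approximations, not a real parser, so treat the counts as estimates.
CLASS_RE = re.compile(r"\b(class|interface|enum)\s+\w+")
METHOD_RE = re.compile(r"\b(public|protected|private|static)\s+[\w<>\[\]]+\s+\w+\s*\(")

files = lines = classes = methods = 0
for dirpath, _, filenames in os.walk(FLINK_ROOT):
    for name in filenames:
        if not name.endswith(".java"):
            continue  # only counting Java sources; Flink also contains Scala
        files += 1
        with open(os.path.join(dirpath, name), errors="ignore") as f:
            text = f.read()
        lines += text.count("\n")
        classes += len(CLASS_RE.findall(text))
        methods += len(METHOD_RE.findall(text))

print(f"files={files} lines={lines} classes~={classes} methods~={methods}")

SonarQube or Codacy will of course give you far more (cyclomatic complexity, 
duplication, dependencies), but for a simple table in a paper this kind of 
script already goes a long way.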

> On 14. Apr 2018, at 14:17, Esa Heikkinen <esa.heikki...@student.tut.fi> wrote:
> 
> Yes, you are right.
> 
> But what if I only focus on the statistical complexity of the Flink sources? E.g. 
> the number of libraries, functions/classes/methods, the number and size of source 
> files, and so on?
> How easy is it to get this information?
> 
> Best, Esa
> 
> -----Original Message-----
> From: Jörn Franke <jornfra...@gmail.com> 
> Sent: Saturday, April 14, 2018 1:43 PM
> To: Esa Heikkinen <esa.heikki...@student.tut.fi>
> Cc: user@flink.apache.org
> Subject: Re: Complexity of Flink
> 
> I think this always depends. I found Flink cleaner compared to other Big 
> Data platforms, and with some experience it is rather easy to deploy.
> 
> However, how do you measure complexity? How do you plan to cater for other 
> components (e.g. deploying in the cloud, deploying locally in a Hadoop cluster, etc.)?
> And how do you take into account the experience of the team leader and the people 
> deploying it, issues with unqualified external service providers, contracts, 
> etc.?
> 
> Those are the variables that you need to define and then validate (case study 
> and/or survey).
> 
>> On 14. Apr 2018, at 12:24, Esa Heikkinen <heikk...@student.tut.fi> wrote:
>> 
>> 
>> Hi
>> 
>> I am writing a scientific article that is related to the deployment of Flink.
>> 
>> It would be very interesting to know how to measure the complexity of the 
>> Flink platform or framework.
>> 
>> Does anyone know of good articles about that?
>> 
>> I think it is not always so simple to deploy and use...
>> 
>> Best, Esa
>> 
>> 
