When we transitioned from 1.3 to 1.4, we ran into some class loader issues.
Though we weren’t using any sophisticated class loader helicopter stunts :)
Specifically…
1. Re-worked our pom.xml to set up shading to better mirror what the 1.4
example pom was doing.
2. Enabled child-first class loading.
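For reference, the two changes above typically look something like the following. This is a hedged sketch, not the poster's actual pom: the plugin version and the relocation pattern are illustrative. Step 2 corresponds to setting `classloader.resolve-order: child-first` in flink-conf.yaml (the default from Flink 1.4 on).

```xml
<!-- pom.xml: shade plugin set up to mirror the Flink 1.4 quickstart pom.
     The version and the relocation pattern below are illustrative only. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Relocate a commonly conflicting dependency into our own
               namespace so it cannot clash with Flink's copy. -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>my.project.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```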
As for the complexity of the source code: that is easy to obtain; just fork it on
GitHub and run it through SonarQube or Codacy Cloud. I am not sure whether this is
not done already by the Flink project. For my open source libraries
(hadoopcryptoledger and hadoopoffice) that also provide Flink modules
Yes, you are right.
But what if I focus only on the statistical complexity of the Flink sources? E.g.
the number of libraries, functions/classes/methods, the number and size of source
files, and so on?
How easy is it to get this information?
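The "number and size of source files" part of the question can be answered with a few lines of code. Below is a minimal sketch (not from the thread) that counts .java files and their total lines under a directory; counting classes and methods properly needs a real parser, which is what tools like SonarQube do.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Crude size metric for a source tree: counts .java files and their total
// line count. This only covers "number and size of source files"; class and
// method counts require actual parsing.
public class SourceMetrics {

    /** Returns {fileCount, totalLines} for all .java files under root. */
    public static long[] measure(Path root) throws IOException {
        long files = 0, lines = 0;
        List<Path> sources;
        try (Stream<Path> walk = Files.walk(root)) {
            sources = walk.filter(p -> p.toString().endsWith(".java"))
                          .collect(Collectors.toList());
        }
        for (Path p : sources) {
            files++;
            try (Stream<String> ls = Files.lines(p)) {
                lines += ls.count();
            }
        }
        return new long[] {files, lines};
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        long[] m = measure(root);
        System.out.println(m[0] + " .java files, " + m[1] + " lines");
    }
}
```

Run against a checkout of the Flink repository, this gives a first rough baseline before reaching for a full static-analysis service.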
Best, Esa
-Original Message-
From: Jörn Franke
I think this always depends. I found Flink cleaner compared to other Big
Data platforms, and with some experience it is rather easy to deploy.
However, how do you measure complexity? How do you plan to cater for other
components (e.g. deploying in the cloud, deploying locally in a Hadoop cluster
Hi
I am writing a scientific article related to the deployment of Flink.
I would be very interested to know how to measure the complexity of the
Flink platform or framework.
Does anyone know of good articles about that?
I think it is not always so simple to deploy and use.
Best, Esa
Hi,
SerializationSchema is a public interface that you can implement.
It has a single method to turn an object into a byte array.
I would suggest implementing your own SerializationSchema.
Best, Fabian
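A minimal sketch of what Fabian suggests follows. To keep the snippet compilable without a Flink dependency, it declares a local stand-in interface with the same single-method shape as Flink's SerializationSchema<T>; in a real job you would implement the Flink interface instead. The Event type and the CSV rendering are purely illustrative.

```java
import java.nio.charset.StandardCharsets;

// Local stand-in mirroring the single method of Flink's
// SerializationSchema<T>; implement the real Flink interface in a job.
interface SerializationSchema<T> {
    byte[] serialize(T element);
}

// Hypothetical event type, used only for illustration.
class Event {
    final String key;
    final long value;
    Event(String key, long value) { this.key = key; this.value = value; }
}

// A minimal schema: renders each event as "key,value" in UTF-8 bytes.
class EventCsvSchema implements SerializationSchema<Event> {
    @Override
    public byte[] serialize(Event e) {
        return (e.key + "," + e.value).getBytes(StandardCharsets.UTF_8);
    }
}
```

An instance of such a schema is then passed to a sink (e.g. a Kafka producer sink), which calls serialize() once per record.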
2018-04-11 15:56 GMT+02:00 Luigi Sgaglione :
> Hi,
>
> I'm