1996fanrui commented on code in PR #677:
URL: https://github.com/apache/flink-kubernetes-operator/pull/677#discussion_r1356017440
##########
flink-autoscaler/pom.xml:
##########
@@ -45,6 +45,32 @@ under the License.
             <scope>provided</scope>
         </dependency>
+        <dependency>
+            <groupId>org.projectlombok</groupId>
+            <artifactId>lombok</artifactId>
+            <version>${lombok.version}</version>
+            <scope>provided</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.junit.jupiter</groupId>
+            <artifactId>junit-jupiter-params</artifactId>
+            <scope>test</scope>
+        </dependency>
+
+        <!-- TODO FLINK-33098: These jackson dependencies can be replaced with
+             flink shaded jackson. It can be done
Review Comment:
Thanks @XComp for the detailed analysis.

> Do we need to shade the jackson dependency?

IIUC, it's not required for now. If it were required, `flink-kubernetes-operator` would have had to complete the shading already.

I would prefer to use the shaded version in the future, for the motivation you mentioned: we have custom auto-scaler implementations.

The flink-shaded version should be enough, so I agree with @gyfora and @mxm: let's use the version bundled with the flink client, and re-shade it if that doesn't work well in the future. WDYT?
-----------------------------
The second question:

This PR is still using the jackson version from `flink-kubernetes-operator` instead of the flink-shaded version. I want to update it after `flink-1.18` is released.

The reason is: the current autoscaler uses `loaderOptions` to limit the serialized size (see the sketch after this list).
- The shaded jackson version in `flink-1.17` is `2.13.4-16.1`, which doesn't support `loaderOptions`.
- The shaded jackson version in `flink-1.18` is `2.14.2-17.0`, which supports `loaderOptions`.
What do you think about updating it after `flink-1.18` is released? It should be released soon. If that makes sense, I can create a JIRA to track it.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]