azagrebin commented on a change in pull request #11245:
URL: https://github.com/apache/flink/pull/11245#discussion_r440857707



##########
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java
##########
@@ -137,11 +138,17 @@
                .withDescription("The cluster-id, which should be no more than 45 characters, is used for identifying " +
                        "a unique Flink cluster. If not set, the client will automatically generate it with a random ID.");
 
+       // The default container image that ties to the exact needed versions of both Flink and Scala.
+       public static final String DEFAULT_CONTAINER_IMAGE = "flink:" + EnvironmentInformation.getVersion() + "-scala_" + EnvironmentInformation.getScalaVersion();
+
        public static final ConfigOption<String> CONTAINER_IMAGE =
                key("kubernetes.container.image")
                .stringType()
-               .defaultValue("flink:latest")
-               .withDescription("Image to use for Flink containers.");
+               .defaultValue(DEFAULT_CONTAINER_IMAGE)
+               .withDescription("Image to use for Flink containers. " +
+                       "The specified image MUST be based upon Apache Flink " +
+                       EnvironmentInformation.getVersion() + " that uses Scala " + EnvironmentInformation.getScalaVersion() + " . " +

Review comment:
       This is a good point. A default value generated like this then does not make much sense. I think the ideal solution would be a default value that contains only the released Flink version, or `latest` for snapshots, both without the Scala version, mostly for documentation purposes.
   
   The actual image used at runtime, if this option is not explicitly configured by the user (and which we can document), could then be the current full image tag: the Flink and Scala versions that were used to build the running Flink. This requires modifying the code that parses the option value.
   
   Ideally, we should also derive the concrete latest released Flink version (e.g. 1.10) for snapshots, since we do not publish a Docker image for snapshot versions (e.g. 1.11-SNAPSHOT), and combine it with the concrete Scala version the build was compiled against. For simplicity, we could ignore the Scala version for snapshots for now and just use `latest`, as in the docs, since snapshots are mostly used for development, where we can ensure the correct Scala version for testing.
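   The fallback suggested above could be sketched roughly like this (a minimal illustration, not the PR's actual code; the helper name `defaultImageTag` and the snapshot handling are assumptions):

```java
// Hypothetical sketch: derive a default Flink container image tag.
// Released versions combine the Flink and Scala versions into the full tag;
// snapshot builds fall back to "flink:latest", since no Docker image is
// published for snapshot versions (e.g. 1.11-SNAPSHOT).
public class DefaultImageTag {

    static String defaultImageTag(String flinkVersion, String scalaVersion) {
        if (flinkVersion.toLowerCase().endsWith("-snapshot")) {
            // Development builds: ignore the Scala version and use "latest",
            // as the review comment suggests for simplicity.
            return "flink:latest";
        }
        return "flink:" + flinkVersion + "-scala_" + scalaVersion;
    }

    public static void main(String[] args) {
        System.out.println(defaultImageTag("1.10.1", "2.11"));
        System.out.println(defaultImageTag("1.11-SNAPSHOT", "2.11"));
    }
}
```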

##########
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java
##########
@@ -137,11 +138,17 @@
                .withDescription("The cluster-id, which should be no more than 45 characters, is used for identifying " +
                        "a unique Flink cluster. If not set, the client will automatically generate it with a random ID.");
 
+       // The default container image that ties to the exact needed versions of both Flink and Scala.
+       public static final String DEFAULT_CONTAINER_IMAGE = "flink:" + EnvironmentInformation.getVersion() + "-scala_" + EnvironmentInformation.getScalaVersion();
+
        public static final ConfigOption<String> CONTAINER_IMAGE =
                key("kubernetes.container.image")
                .stringType()
-               .defaultValue("flink:latest")
-               .withDescription("Image to use for Flink containers.");
+               .defaultValue(DEFAULT_CONTAINER_IMAGE)
+               .withDescription("Image to use for Flink containers. " +
+                       "The specified image MUST be based upon Apache Flink " +
+                       EnvironmentInformation.getVersion() + " that uses Scala " + EnvironmentInformation.getScalaVersion() + " . " +

Review comment:
       If you had problems with a Flink/Scala version mismatch, the problem may actually be more general. A solution could be to embed an explicit runtime check comparing the versions used to build the user code against those of the cluster the user submits the job to. That way we could fail fast with a descriptive mismatch message, or at least log the mismatch. I am not aware of whether we already have something like this; it also sounds like a separate effort.
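   Such a fail-fast check could look roughly like the following (purely illustrative; the method name `checkVersionMatch` and how the cluster versions would be obtained are assumptions, not existing Flink API):

```java
// Hypothetical sketch of a fail-fast version check at job submission:
// compare the client's Flink/Scala versions against the cluster's and
// fail with a descriptive message on mismatch.
public class VersionCheck {

    static void checkVersionMatch(
            String clientFlink, String clientScala,
            String clusterFlink, String clusterScala) {
        if (!clientFlink.equals(clusterFlink) || !clientScala.equals(clusterScala)) {
            throw new IllegalStateException(String.format(
                "Version mismatch: client built with Flink %s / Scala %s, "
                    + "but cluster runs Flink %s / Scala %s.",
                clientFlink, clientScala, clusterFlink, clusterScala));
        }
    }

    public static void main(String[] args) {
        // Matching versions: no exception.
        checkVersionMatch("1.10.1", "2.11", "1.10.1", "2.11");
        // Scala mismatch: fail fast with a descriptive message.
        try {
            checkVersionMatch("1.10.1", "2.12", "1.10.1", "2.11");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```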




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

