[
https://issues.apache.org/jira/browse/FLINK-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16702113#comment-16702113
]
ASF GitHub Bot commented on FLINK-10955:
----------------------------------------
tillrohrmann closed pull request #7150: [FLINK-10955] Extend release notes for
Apache Flink 1.7.0
URL: https://github.com/apache/flink/pull/7150
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
diff --git a/docs/release-notes/flink-1.7.md b/docs/release-notes/flink-1.7.md
index 8cdfe9d5400..ea2ae6f87e5 100644
--- a/docs/release-notes/flink-1.7.md
+++ b/docs/release-notes/flink-1.7.md
@@ -22,6 +22,97 @@ under the License.
These release notes discuss important aspects, such as configuration,
behavior, or dependencies, that changed between Flink 1.6 and Flink 1.7. Please
read these notes carefully if you are planning to upgrade your Flink version to
1.7.
+### Scala 2.12 support
+
+When using Scala `2.12` you might have to add explicit type annotations in
places where they were not required when using Scala `2.11`.
+This is an excerpt from the `TransitiveClosureNaive.scala` example in the
Flink code base that shows the changes that could be required.
+
+Previous code:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+ (prev, next, out: Collector[(Long, Long)]) => {
+ val prevPaths = prev.toSet
+ for (n <- next)
+ if (!prevPaths.contains(n)) out.collect(n)
+ }
+}
+```
+
+With Scala `2.12` you have to change it to:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+ (prev: Iterator[(Long, Long)], next: Iterator[(Long, Long)], out: Collector[(Long, Long)]) => {
+ val prevPaths = prev.toSet
+ for (n <- next)
+ if (!prevPaths.contains(n)) out.collect(n)
+ }
+}
+```
+
+The reason for this is that Scala `2.12` changes how lambdas are implemented.
+They now use the SAM (single abstract method) interface support introduced in Java 8.
+This makes some method calls ambiguous because both Scala-style lambdas and SAM
conversions are now candidates for methods where it was previously clear which
method would be invoked.
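As a minimal, self-contained sketch of the SAM conversion that Scala `2.12` performs (the `Collector` trait below is a simplified stand-in for illustration, not Flink's actual interface):

```
// Simplified stand-in for a Java-style SAM interface (NOT Flink's Collector).
trait Collector[T] { def collect(t: T): Unit }

val collected = scala.collection.mutable.ListBuffer[Int]()

// Since Scala 2.12, a plain lambda can implement a single-abstract-method
// trait directly -- the same mechanism Java 8 uses for lambdas:
val collector: Collector[Int] = x => collected += x

collector.collect(1)
collector.collect(2)
// collected now holds 1 and 2
```

Because the lambda can satisfy either a Scala function type or such a SAM interface, overloaded methods accepting both become ambiguous without an explicit annotation.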
+
+### State evolution
+
+Before Flink 1.7, serializer snapshots were implemented as a
`TypeSerializerConfigSnapshot`, which is now deprecated and will eventually be
removed, fully replaced by the new `TypeSerializerSnapshot` interface
introduced in 1.7.
+Moreover, the responsibility of serializer schema compatibility checks lived
within the `TypeSerializer`, implemented in the
`TypeSerializer#ensureCompatibility(TypeSerializerConfigSnapshot)` method.
+
+To be future-proof and to retain the flexibility to migrate your state
serializers and schema, it is highly recommended to move away from the old
abstractions.
+Details and migration guides can be found
[here](https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/state/custom_serialization.html).
+
+### Removal of the legacy mode
+
+Flink no longer supports the legacy mode.
+If you depend on this, then please use Flink `1.6.x`.
+
+### Savepoints being used for recovery
+
+Savepoints are now used while recovering.
+Previously, when using an exactly-once sink, one could run into problems with
duplicate output data when a failure occurred after a savepoint was taken but
before the next checkpoint completed.
+As a result, savepoints are no longer exclusively under the control of the
user.
+A savepoint should not be moved or deleted if no newer checkpoint or savepoint
has been taken since.
+
+### MetricQueryService runs in separate thread pool
+
+The metric query service now runs in its own `ActorSystem`.
+Consequently, it needs to open a new port for the query services to
communicate with each other.
+The [query service
port]({{site.baseurl}}/ops/config.html#metrics-internal-query-service-port) can
be configured in `flink-conf.yaml`.
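For example (the port range shown is illustrative, not a recommended value), the entry in `flink-conf.yaml` could look like:
```
metrics.internal.query-service.port: 50100-50200
```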
+
+### Granularity of latency metrics
+
+The default granularity for latency metrics has been modified.
+To restore the previous behavior users have to explicitly set the
[granularity]({{site.baseurl}}/ops/config.html#metrics-latency-granularity) to
`subtask`.
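For example, the previous behavior can be restored with the following `flink-conf.yaml` entry:
```
metrics.latency.granularity: subtask
```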
+
+### Latency marker activation
+
+Latency metrics are now disabled by default, which will affect all jobs that
do not explicitly set the `latencyTrackingInterval` via
`ExecutionConfig#setLatencyTrackingInterval`.
+To restore the previous default behavior users have to configure the [latency
interval]({{site.baseurl}}/ops/config.html#metrics-latency-interval) in
`flink-conf.yaml`.
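For example (the interval is in milliseconds; the value shown is illustrative):
```
metrics.latency.interval: 2000
```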
+
+### Relocation of Hadoop's Netty dependency
+
+We now also relocate Hadoop's Netty dependency from `io.netty` to
`org.apache.flink.hadoop.shaded.io.netty`.
+You can now bundle your own version of Netty into your job but may no longer
assume that `io.netty` is present in the `flink-shaded-hadoop2-uber-*.jar` file.
+
+### Local recovery fixed
+
+With the improvements to Flink's scheduling, recoveries with local recovery
enabled can no longer require more slots than were needed before the failure.
+Consequently, we encourage our users to enable [local
recovery]({{site.baseurl}}/ops/config.html#state-backend-local-recovery) in
`flink-conf.yaml`.
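For example, local recovery can be enabled with the following `flink-conf.yaml` entry:
```
state.backend.local-recovery: true
```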
+
+### Support for multi slot TaskManagers
+
+Flink now properly supports `TaskManagers` with multiple slots.
+Consequently, `TaskManagers` can now be started with an arbitrary number of
slots and it is no longer recommended to start them with a single slot.
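For example (the slot count shown is illustrative), in `flink-conf.yaml`:
```
taskmanager.numberOfTaskSlots: 4
```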
+
+### StandaloneJobClusterEntrypoint generates JobGraph with fixed JobID
+
+The `StandaloneJobClusterEntrypoint`, which is launched by the script
`standalone-job.sh` and used for the job-mode container images, now starts all
jobs with a fixed `JobID`.
+Thus, in order to run a cluster in HA mode, one needs to set a different
[cluster id]({{site.baseurl}}/ops/config.html#high-availability-cluster-id) for
each job/cluster.
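For example (the cluster id shown is illustrative), in `flink-conf.yaml`:
```
high-availability.cluster-id: my-flink-job-1
```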
+
<!-- Should be removed once FLINK-10911 is fixed -->
### Scala shell does not work with Scala 2.12
@@ -37,6 +128,15 @@ You should only use this feature if you are executing a
stateless streaming job.
In any other cases, it is highly recommended to remove the config option
`jobmanager.execution.failover-strategy` from your `flink-conf.yaml` or set it
to `"full"`.
In order to avoid future problems, this feature has been removed from the
documentation until it will be fixed.
-See [FLINK-10880](https://issues.apache.org/jira/browse/FLINK-10880) for more
details.
+See [FLINK-10880](https://issues.apache.org/jira/browse/FLINK-10880) for more
details.
+
+### SQL over window preceding clause
+
+The over window `preceding` clause is now optional.
+It defaults to `UNBOUNDED` if not specified.
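For illustration (table and column names are hypothetical), the following two queries are now equivalent:
```
SELECT b, COUNT(a)
  OVER (PARTITION BY b ORDER BY rowtime)
FROM t

SELECT b, COUNT(a)
  OVER (PARTITION BY b ORDER BY rowtime RANGE UNBOUNDED PRECEDING)
FROM t
```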
+
+### OperatorSnapshotUtil writes v2 snapshots
+
+Snapshots created with `OperatorSnapshotUtil` are now written in the savepoint
format `v2`.
{% top %}
> Extend release notes for Flink 1.7
> ----------------------------------
>
> Key: FLINK-10955
> URL: https://issues.apache.org/jira/browse/FLINK-10955
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Affects Versions: 1.7.0
> Reporter: Till Rohrmann
> Assignee: Till Rohrmann
> Priority: Critical
> Labels: pull-request-available
> Fix For: 1.7.1
>
>
> We should extend the release notes for Flink 1.7 to include the release notes
> of all fixed issues with fix version {{1.7.0}}.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)