pnowojski commented on a change in pull request #12699:
URL: https://github.com/apache/flink/pull/12699#discussion_r445496733



##########
File path: docs/release-notes/flink-1.11.md
##########
@@ -0,0 +1,291 @@
+---
+title: "Release Notes - Flink 1.11"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.10 and Flink 1.11. Please read these notes carefully if you are planning to upgrade your Flink version to 1.11.
+
+* This will be replaced by the TOC
+{:toc}
+
+### Clusters & Deployment
+#### Support for Hadoop 3.0.0 and higher ([FLINK-11086](https://issues.apache.org/jira/browse/FLINK-11086))
+The Flink project does not provide any updated `flink-shaded-hadoop-*` jars.
+Users need to provide Hadoop dependencies through the `HADOOP_CLASSPATH` environment variable (recommended) or via the `lib/` folder.
+Also, the `include-hadoop` Maven profile has been removed.
+
+#### Removal of `LegacyScheduler` ([FLINK-15629](https://issues.apache.org/jira/browse/FLINK-15629))
+Flink no longer supports the legacy scheduler.
+Hence, setting `jobmanager.scheduler: legacy` will no longer work and will fail with an `IllegalArgumentException`.
+The only valid option for `jobmanager.scheduler` is the default value `ng`.
+
+#### Bind user code class loader to lifetime of a slot ([FLINK-16408](https://issues.apache.org/jira/browse/FLINK-16408))
+The user code class loader is now reused by the `TaskExecutor` as long as at least one slot is allocated for the respective job.
+This changes Flink's recovery behaviour slightly: static fields are no longer reloaded on recovery.
+The benefit is that this change drastically reduces pressure on the JVM's metaspace.
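+
+As an illustration, consider the following sketch (the class and its static cache are hypothetical):
+
+```java
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.flink.api.common.functions.MapFunction;
+
+public class CachingMapper implements MapFunction<String, String> {
+
+    // Before 1.11, user code classes were reloaded on every recovery, so this
+    // static cache was re-initialized. Since 1.11 the class loader is reused
+    // while the TaskExecutor holds at least one slot for the job, so the field
+    // keeps its value across restarts.
+    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();
+
+    @Override
+    public String map(String value) {
+        return CACHE.computeIfAbsent(value, String::toUpperCase);
+    }
+}
+```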
+
+#### Replaced `slave` file name with `workers` ([FLINK-18307](https://issues.apache.org/jira/browse/FLINK-18307))
+For standalone setups, the file listing the worker nodes is no longer called `slaves` but `workers`.
+Previous setups that use the `start-cluster.sh` and `stop-cluster.sh` scripts need to rename that file.
+
+#### Flink Docker Integration Improvements
+The example `Dockerfiles` and Docker image `build.sh` scripts have been removed from [the Flink Github repository](https://github.com/apache/flink). The community will no longer maintain these examples, including the examples of integration with Bluemix, in the Flink Github repository. Therefore, the following modules have been deleted from the Flink Github repository:
+- `flink-contrib/docker-flink`
+- `flink-container/docker`
+- `flink-container/kubernetes`
+
+Check the updated user documentation for [Flink Docker integration](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html) instead. It now describes in detail how to [use](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#how-to-run-a-flink-image) and [customize](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#customize-flink-image) [the official Flink Docker image](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#docker-hub-flink-images): configuration options, logging, plugins, adding further dependencies and installing software. The documentation also includes examples for Session and Job cluster deployments with:
+- [docker run](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#how-to-run-flink-image)
+- [docker compose](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#flink-with-docker-compose)
+- [docker swarm](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#flink-with-docker-swarm)
+- [standalone Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/kubernetes.html)
+
+### Memory Management
+#### New Flink Master Memory Model
+##### Overview
+With [FLIP-116](https://cwiki.apache.org/confluence/display/FLINK/FLIP-116%3A+Unified+Memory+Configuration+for+Job+Managers), a new memory model has been introduced for the Flink Master. New configuration options have been introduced to control the memory consumption of the Flink Master process. This affects all types of deployments: standalone, YARN, Mesos, and the new active Kubernetes integration.
+
+Please check the user documentation for [more details](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_master.html).
+
+If you try to reuse your previous Flink configuration without any adjustments, the new memory model can result in differently computed memory parameters for the JVM and, thus, in performance changes or even failures. See also [the migration guide](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration).
+
+##### Deprecation and breaking changes
+The following options are deprecated:
+ * `jobmanager.heap.size`
+ * `jobmanager.heap.mb`
+
+If these deprecated options are still used, they will be interpreted as one of the following new options in order to maintain backwards compatibility:
+ * [JVM Heap](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_master.html#configure-jvm-heap) ([`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-heap-size)) for standalone and Mesos deployments
+ * [Total Process Memory](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_master.html#configure-total-memory) ([`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size)) for containerized deployments (Kubernetes and YARN)
+
+The following options have been removed and have no effect anymore:
+ * `containerized.heap-cutoff-ratio`
+ * `containerized.heap-cutoff-min`
+
+There is [no container cut-off](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#container-cut-off-memory) anymore.
+
+##### JVM arguments
+The `direct` and `metaspace` memory of the Flink Master's JVM process are now limited by configurable values:
+ * [`jobmanager.memory.off-heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size)
+ * [`jobmanager.memory.jvm-metaspace.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-jvm-metaspace-size)
+
+See also [JVM Parameters](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup.html#jvm-parameters).
+
+<span class="label label-warning">Attention</span> These new limits can produce `OutOfMemoryError` exceptions if they are configured too low or if there is a memory leak. See also [the troubleshooting guide](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
+
+#### Removal of deprecated `mesos.resourcemanager.tasks.mem` ([FLINK-15198](https://issues.apache.org/jira/browse/FLINK-15198))
+The `mesos.resourcemanager.tasks.mem` option, deprecated in 1.10 in favour of `taskmanager.memory.process.size`, has been completely removed and has no effect anymore in 1.11+.
+
+### Table API & SQL
+#### Blink is now the default planner ([FLINK-16934](https://issues.apache.org/jira/browse/FLINK-16934))
+The default table planner has been changed to Blink.
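+
+If you are not ready to migrate yet, the legacy planner can still be selected explicitly. A minimal sketch, assuming a Java streaming job that uses the bridge environment:
+
+```java
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+
+public class LegacyPlannerExample {
+    public static void main(String[] args) {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+        // Opt back into the legacy planner; omit useOldPlanner() to get Blink.
+        EnvironmentSettings settings = EnvironmentSettings.newInstance()
+                .useOldPlanner()
+                .inStreamingMode()
+                .build();
+        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
+    }
+}
+```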
+
+#### Changed package structure for Table API ([FLINK-15947](https://issues.apache.org/jira/browse/FLINK-15947))
+Due to various issues with the packages `org.apache.flink.table.api.scala/java`, all classes from those packages were relocated.
+Moreover, the Scala expressions were moved to `org.apache.flink.table.api`, as announced in Flink 1.9.
+
+If you used one of:
+* `org.apache.flink.table.api.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.scala.BatchTableEnvironment`
+
+and you do not convert to/from DataStream, switch to:
+* `org.apache.flink.table.api.TableEnvironment`
+
+If you do convert to/from DataStream/DataSet, change your imports to one of:
+* `org.apache.flink.table.api.bridge.java.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.StreamTableEnvironment`
+* `org.apache.flink.table.api.bridge.java.BatchTableEnvironment`
+* `org.apache.flink.table.api.bridge.scala.BatchTableEnvironment`
+
+For the Scala expressions, use the import:
+* `org.apache.flink.table.api._` instead of `org.apache.flink.table.api.scala._`
+
+Additionally, if you use Scala's implicit conversions to/from DataStream/DataSet, import `org.apache.flink.table.api.bridge.scala._` instead of `org.apache.flink.table.api.scala._`. A Java sketch of the new imports follows below.
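+
+For the Java side, a minimal sketch of the relocated imports (the class name is hypothetical):
+
+```java
+// Before 1.11 (removed): org.apache.flink.table.api.java.StreamTableEnvironment
+// Since 1.11, a pure Table program only needs the unified environment from
+// org.apache.flink.table.api; the DataStream/DataSet-bridging environments
+// moved to org.apache.flink.table.api.bridge.*.
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class UnifiedEnvironmentExample {
+    public static void main(String[] args) {
+        EnvironmentSettings settings = EnvironmentSettings.newInstance()
+                .inStreamingMode()
+                .build();
+        TableEnvironment tEnv = TableEnvironment.create(settings);
+    }
+}
+```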
+
+#### Removal of deprecated `StreamTableSink#emitDataStream` ([FLINK-16362](https://issues.apache.org/jira/browse/FLINK-16362))
+Existing `StreamTableSink` implementations should remove the `emitDataStream` method.
+
+#### Removal of `BatchTableSink#emitDataSet` ([FLINK-16535](https://issues.apache.org/jira/browse/FLINK-16535))
+Existing `BatchTableSink` implementations should rename `emitDataSet` to `consumeDataSet` and return a `DataSink`.
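+
+A sketch of an adjusted sink; the sink itself is hypothetical and simply discards its input (`StreamTableSink` implementations change analogously, with `consumeDataStream` returning a `DataStreamSink`):
+
+```java
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.io.DiscardingOutputFormat;
+import org.apache.flink.api.java.operators.DataSink;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.sinks.BatchTableSink;
+import org.apache.flink.table.sinks.TableSink;
+import org.apache.flink.types.Row;
+
+public class DiscardingBatchSink implements BatchTableSink<Row> {
+
+    private final TableSchema schema;
+
+    public DiscardingBatchSink(TableSchema schema) {
+        this.schema = schema;
+    }
+
+    @Override
+    public DataSink<?> consumeDataSet(DataSet<Row> dataSet) {
+        // Was `void emitDataSet(DataSet<Row>)` before; now the created
+        // DataSink has to be returned to the planner.
+        return dataSet.output(new DiscardingOutputFormat<>());
+    }
+
+    @Override
+    public TypeInformation<Row> getOutputType() {
+        return schema.toRowType();
+    }
+
+    @Override
+    public TableSchema getTableSchema() {
+        return schema;
+    }
+
+    @Override
+    public TableSink<Row> configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) {
+        return this;
+    }
+}
+```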
+  
+#### Corrected execution behavior of TableEnvironment.execute() and StreamExecutionEnvironment.execute() ([FLINK-16363](https://issues.apache.org/jira/browse/FLINK-16363))
+In previous versions, `TableEnvironment.execute()` and `StreamExecutionEnvironment.execute()` could both trigger table and DataStream programs.
+Since Flink 1.11.0, table programs can only be triggered by `TableEnvironment.execute()`.
+Once a table program is converted into a DataStream program (through the `toAppendStream()` or `toRetractStream()` methods), it can only be triggered by `StreamExecutionEnvironment.execute()`.
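+
+A minimal sketch of the corrected behaviour (names are hypothetical):
+
+```java
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.types.Row;
+
+public class StreamExecuteBehaviorExample {
+    public static void main(String[] args) throws Exception {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+        Table table = tEnv.fromDataStream(env.fromElements(1, 2, 3));
+        // After toAppendStream() this is a DataStream program ...
+        DataStream<Row> stream = tEnv.toAppendStream(table, Row.class);
+        stream.print();
+
+        // ... so only StreamExecutionEnvironment.execute() triggers it.
+        env.execute("stream-execute-behavior-example");
+    }
+}
+```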
+
+#### Corrected execution behavior of ExecutionEnvironment.execute() and BatchTableEnvironment.execute() ([FLINK-17126](https://issues.apache.org/jira/browse/FLINK-17126))
+In previous versions, `BatchTableEnvironment.execute()` and `ExecutionEnvironment.execute()` could both trigger table and DataSet programs for the legacy batch planner.
+Since Flink 1.11.0, batch table programs can only be triggered by `BatchTableEnvironment.execute()`.
+Once a table program is converted into a DataSet program (through the `toDataSet()` method), it can only be triggered by `ExecutionEnvironment.execute()`.
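+
+The DataSet counterpart of the sketch above:
+
+```java
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.io.DiscardingOutputFormat;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.BatchTableEnvironment;
+import org.apache.flink.types.Row;
+
+public class BatchExecuteBehaviorExample {
+    public static void main(String[] args) throws Exception {
+        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+        BatchTableEnvironment tEnv = BatchTableEnvironment.create(env);
+
+        Table table = tEnv.fromDataSet(env.fromElements(1, 2, 3));
+        // After toDataSet() this is a DataSet program ...
+        DataSet<Row> result = tEnv.toDataSet(table, Row.class);
+        result.output(new DiscardingOutputFormat<>());
+
+        // ... so only ExecutionEnvironment.execute() triggers it.
+        env.execute("batch-execute-behavior-example");
+    }
+}
+```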
+
+#### Added a change flag to Row type ([FLINK-16998](https://issues.apache.org/jira/browse/FLINK-16998))
+An additional change flag called `RowKind` was added to the `Row` type.
+This changed the serialization format and will trigger a state migration.
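+
+A short sketch of the new flag, assuming the `Row.ofKind`/`getKind` accessors introduced with this change:
+
+```java
+import org.apache.flink.types.Row;
+import org.apache.flink.types.RowKind;
+
+public class RowKindExample {
+    public static void main(String[] args) {
+        // Row.of(...) still creates INSERT rows, so existing code keeps working.
+        Row insert = Row.of("alice", 42);
+        System.out.println(insert.getKind()); // INSERT
+
+        // The kind can be set explicitly, e.g. for retraction messages.
+        Row retract = Row.ofKind(RowKind.DELETE, "alice", 42);
+        System.out.println(retract.getKind()); // DELETE
+    }
+}
+```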
+
+### Configuration
+#### Renamed log4j-yarn-session.properties and logback-yarn.xml properties files ([FLINK-17527](https://issues.apache.org/jira/browse/FLINK-17527))
+The logging properties files `log4j-yarn-session.properties` and `logback-yarn.xml` have been renamed to `log4j-session.properties` and `logback-session.xml`.
+Moreover, both `yarn-session.sh` and `kubernetes-session.sh` now use these logging properties files.
+
+### State
+#### Removal of deprecated background cleanup toggle (State TTL) ([FLINK-15620](https://issues.apache.org/jira/browse/FLINK-15620))
+The `StateTtlConfig#cleanupInBackground` method has been removed, because it was deprecated and background TTL cleanup has been enabled by default since 1.10.
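+
+A minimal sketch of a TTL configuration that relies on the default background cleanup (the seven-day TTL is arbitrary):
+
+```java
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.time.Time;
+
+public class TtlConfigExample {
+    public static void main(String[] args) {
+        StateTtlConfig ttlConfig = StateTtlConfig
+                .newBuilder(Time.days(7))
+                // .cleanupInBackground()  // removed in 1.11; cleanup is on by default
+                .build();
+        System.out.println(ttlConfig);
+    }
+}
+```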
+
+#### Removal of deprecated option to disable TTL compaction filter ([FLINK-15621](https://issues.apache.org/jira/browse/FLINK-15621))
+The TTL compaction filter in RocksDB has been enabled by default since 1.10 and is now always enabled in 1.11+.
+Because of that, the following option and methods have been removed in 1.11:
+- `state.backend.rocksdb.ttl.compaction.filter.enabled`
+- `StateTtlConfig#cleanupInRocksdbCompactFilter()`
+- `RocksDBStateBackend#isTtlCompactionFilterEnabled`
+- `RocksDBStateBackend#enableTtlCompactionFilter`
+- `RocksDBStateBackend#disableTtlCompactionFilter`
+- (state_backend.py) `is_ttl_compaction_filter_enabled`
+- (state_backend.py) `enable_ttl_compaction_filter`
+- (state_backend.py) `disable_ttl_compaction_filter`
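+
+The parameterized variant for tuning the filter is unaffected; a sketch, assuming the `cleanupInRocksdbCompactFilter(long)` builder method that has existed since 1.10:
+
+```java
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.time.Time;
+
+public class CompactFilterTuningExample {
+    public static void main(String[] args) {
+        StateTtlConfig ttlConfig = StateTtlConfig
+                .newBuilder(Time.days(7))
+                // Refresh the current timestamp after every 1000 processed entries.
+                .cleanupInRocksdbCompactFilter(1000)
+                .build();
+        System.out.println(ttlConfig);
+    }
+}
+```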
+
+#### Changed argument type of StateBackendFactory#createFromConfig ([FLINK-16913](https://issues.apache.org/jira/browse/FLINK-16913))
+Starting from Flink 1.11, the `StateBackendFactory#createFromConfig` interface takes a `ReadableConfig` instead of a `Configuration`.
+A `Configuration` instance is still a valid argument to that method, as it implements the `ReadableConfig` interface.
+Implementors of a custom `StateBackend` should adjust their implementations.
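+
+A sketch of an adjusted factory (the factory itself is hypothetical; it builds an `FsStateBackend` from the standard checkpoint directory option):
+
+```java
+import org.apache.flink.configuration.CheckpointingOptions;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.runtime.state.StateBackendFactory;
+import org.apache.flink.runtime.state.filesystem.FsStateBackend;
+
+public class MyStateBackendFactory implements StateBackendFactory<FsStateBackend> {
+
+    @Override
+    public FsStateBackend createFromConfig(ReadableConfig config, ClassLoader classLoader) {
+        // The parameter type changed from Configuration to ReadableConfig;
+        // reading options works the same way through ReadableConfig#get.
+        String checkpointDir = config.get(CheckpointingOptions.CHECKPOINTS_DIRECTORY);
+        return new FsStateBackend(checkpointDir);
+    }
+}
+```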
+
+#### Removal of deprecated OptionsFactory and ConfigurableOptionsFactory classes ([FLINK-18242](https://issues.apache.org/jira/browse/FLINK-18242))
+The deprecated `OptionsFactory` and `ConfigurableOptionsFactory` classes have been removed.
+Please use `RocksDBOptionsFactory` and `ConfigurableRocksDBOptionsFactory` instead.
+Please also recompile your application code if any class extends `DefaultConfigurableOptionsFactory`.
+
+#### Enabled `setTotalOrderSeek` by default ([FLINK-17800](https://issues.apache.org/jira/browse/FLINK-17800))
+Since Flink 1.11, the option `setTotalOrderSeek` is enabled by default for RocksDB's `ReadOptions`.
+This prevents users from accidentally breaking correctness by misusing `optimizeForPointLookup`.
+For backward compatibility, customizing the `ReadOptions` through `RocksDBOptionsFactory` is still supported.
+Please set `setTotalOrderSeek` back to `false` if you observe any performance regression (it should not happen according to our testing).
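+
+A sketch of such a factory (the class name is hypothetical; it assumes the `createReadOptions` hook added in 1.11 and also illustrates the `RocksDBOptionsFactory` interface that replaces the removed `OptionsFactory`):
+
+```java
+import java.util.Collection;
+
+import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
+import org.rocksdb.ColumnFamilyOptions;
+import org.rocksdb.DBOptions;
+import org.rocksdb.ReadOptions;
+
+public class TotalOrderSeekOffFactory implements RocksDBOptionsFactory {
+
+    @Override
+    public DBOptions createDBOptions(DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
+        return currentOptions;
+    }
+
+    @Override
+    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
+        return currentOptions;
+    }
+
+    @Override
+    public ReadOptions createReadOptions(ReadOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
+        // Restore the pre-1.11 behaviour only if you have verified it is safe
+        // for your access pattern.
+        return currentOptions.setTotalOrderSeek(false);
+    }
+}
+```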
+
+#### Increased default size of `state.backend.fs.memory-threshold` ([FLINK-17865](https://issues.apache.org/jira/browse/FLINK-17865))
+The default value of `state.backend.fs.memory-threshold` has been increased from 1K to 20K to prevent creating too many small files on the remote file system for small states.
+Jobs with high parallelism on sources or stateful operators may run into "JM OOM" or "RPC message exceeding maximum frame size" problems with this change.
+If you encounter such issues, please manually set the configuration back to 1K.
+
+### PyFlink
+#### Throw exceptions for unsupported data types ([FLINK-16606](https://issues.apache.org/jira/browse/FLINK-16606))
+DataTypes can be configured with parameters, e.g., precision.
+However, in previous releases the precision provided by users did not take effect and the default value was used instead.
+To avoid confusion, since Flink 1.11 an exception is thrown if the configured value is not supported, to make the limitation more visible to users.
+Changes include:
+- the precision for `TimeType` can only be `0`
+- the length for `VarBinaryType`/`VarCharType` can only be `0x7fffffff`
+- the precision/scale for `DecimalType` can only be `38`/`18`
+- the precision for `TimestampType`/`LocalZonedTimestampType` can only be `3`
+- the resolution for `DayTimeIntervalType` can only be `SECOND` and the `fractionalPrecision` can only be `3`
+- the resolution for `YearMonthIntervalType` can only be `MONTH` and the `yearPrecision` can only be `2`
+- `CharType`/`BinaryType`/`ZonedTimestampType` are not supported

Review comment:
       I believe it's this:
   
   >Or does it mean that the only valid precision value for TimeType is 0 and if one configures a different value, then it will fail with an exception?
   
   Yes, of course it should be `can` instead of `could`. I will rephrase it.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

