This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 49246bc03b6015970724b495cf11959b7c6c665d
Author: Stephan Ewen <[email protected]>
AuthorDate: Mon Mar 15 18:50:24 2021 +0100

    [rebuild site]
---
 content/flink-operations.html         |   4 +-
 content/img/flink_feature_radar.svg   | 298 +++++++++++++++++++++++++++++
 content/img/flink_feature_radar_2.svg |   3 +
 content/roadmap.html                  | 350 +++++++++++++++++++++++-----------
 4 files changed, 540 insertions(+), 115 deletions(-)

diff --git a/content/flink-operations.html b/content/flink-operations.html
index 61744c2..9b9a52a 100644
--- a/content/flink-operations.html
+++ b/content/flink-operations.html
@@ -218,7 +218,7 @@
 
 <p>Machine and process failures are ubiquitous in distributed systems. A 
distributed stream processor like Flink must recover from failures in order to 
be able to run streaming applications 24/7. This means not only restarting 
an application after a failure, but also ensuring that its internal 
state remains consistent, such that the application can continue processing as 
if the failure had never happened.</p>
 
-<p>Flink provides a several features to ensure that applications keep running 
and remain consistent:</p>
+<p>Flink provides several features to ensure that applications keep running 
and remain consistent:</p>
 
 <ul>
   <li><strong>Consistent Checkpoints</strong>: Flink’s recovery mechanism is 
based on consistent checkpoints of an application’s state. In case of a 
failure, the application is restarted and its state is loaded from the latest 
checkpoint. In combination with resettable stream sources, this feature can 
guarantee <em>exactly-once state consistency</em>.</li>
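The checkpoint-plus-resettable-source mechanism described above can be sketched as a toy model. This is an illustrative sketch only, not Flink's actual implementation (real checkpoints are asynchronous, distributed, and coordinated via barriers); the names `ResettableSource` and `run_with_recovery` are hypothetical:

```python
class ResettableSource:
    """A source that can be rewound to any offset (like a Kafka partition)."""
    def __init__(self, records):
        self.records = records
        self.offset = 0

    def seek(self, offset):
        self.offset = offset

    def next(self):
        record = self.records[self.offset]
        self.offset += 1
        return record


def run_with_recovery(source, checkpoint_interval, fail_at=None):
    """Count records, snapshotting (offset, count) periodically.

    On failure, restore the latest snapshot and rewind the source, so the
    final count is as if the failure never happened (exactly-once state).
    """
    checkpoint = (0, 0)          # (source offset, operator state)
    count = 0
    processed = 0                # total processing attempts, incl. replays
    while source.offset < len(source.records):
        if fail_at is not None and processed == fail_at:
            fail_at = None       # simulate a single crash, then recover:
            source.seek(checkpoint[0])   # rewind the resettable source
            count = checkpoint[1]        # restore operator state
            continue
        source.next()
        count += 1
        processed += 1
        if count % checkpoint_interval == 0:
            checkpoint = (source.offset, count)
    return count
```

Even though some records are read twice after the simulated crash, the restored state makes the final result identical to a failure-free run.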
@@ -230,7 +230,7 @@
 
 <h2 id="update-migrate-suspend-and-resume-your-applications">Update, Migrate, 
Suspend, and Resume Your Applications</h2>
 
-<p>Streaming applications that power business-critical services need to be 
maintained. Bugs need to be fixed and improvements or new features need to be 
implemented. However, updating a stateful streaming application is not trivial. 
Often one cannot simply stop the applications and restart an fixed or improved 
version because one cannot afford to lose the state of the application.</p>
+<p>Streaming applications that power business-critical services need to be 
maintained. Bugs need to be fixed and improvements or new features need to be 
implemented. However, updating a stateful streaming application is not trivial. 
Often one cannot simply stop the applications and restart a fixed or improved 
version because one cannot afford to lose the state of the application.</p>
 
 <p>Flink’s <em>Savepoints</em> are a unique and powerful feature that solves 
the issue of updating stateful applications and many other related challenges. 
A savepoint is a consistent snapshot of an application’s state and therefore 
very similar to a checkpoint. However, in contrast to checkpoints, savepoints 
need to be manually triggered and are not automatically removed when an 
application is stopped. A savepoint can be used to start a state-compatible 
application and initialize its sta [...]
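The savepoint workflow above (stop an application, take a manually triggered snapshot, start an updated version from that snapshot) can be illustrated with a toy sketch. The class names are hypothetical and the "state" is a plain dict; real Flink savepoints are triggered via the CLI/REST API and store serialized operator state:

```python
import copy

class WordCountApp:
    """Toy stateful streaming application: counts word occurrences."""
    def __init__(self, initial_state=None):
        # A new application can be initialized from a savepoint's state.
        self.state = dict(initial_state or {})

    def process(self, word):
        self.state[word] = self.state.get(word, 0) + 1

    def savepoint(self):
        # Manually triggered; survives the application being stopped.
        return copy.deepcopy(self.state)


class WordCountAppV2(WordCountApp):
    """An 'updated' application: same state schema, fixed logic
    (normalizes case before counting)."""
    def process(self, word):
        super().process(word.lower())
```

A bug-fixed version can then resume from the old application's savepoint without losing any accumulated state.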
 
diff --git a/content/img/flink_feature_radar.svg 
b/content/img/flink_feature_radar.svg
new file mode 100644
index 0000000..52ac28b
--- /dev/null
+++ b/content/img/flink_feature_radar.svg
@@ -0,0 +1,298 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Generator: Adobe Illustrator 25.2.0, SVG Export Plug-In . SVG Version: 
6.00 Build 0)  -->
+<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
+        viewBox="0 0 1462 1611" style="enable-background:new 0 0 1462 1611;" 
xml:space="preserve">
+<style type="text/css">
+       .st0{opacity:0.5;fill:#F2F2F2;enable-background:new    ;}
+       .st1{fill:none;}
+       .st2{fill:#363636;}
+       .st3{font-family:'Trenda-Bold';}
+       .st4{font-size:31px;}
+       
.st5{opacity:0.5;fill:none;stroke:#363636;stroke-miterlimit:10;stroke-dasharray:3,3;enable-background:new
    ;}
+       .st6{fill:#E8436A;}
+       .st7{font-size:26px;}
+       .st8{fill:#4B9654;}
+       .st9{fill:#2F8DC1;}
+       .st10{fill:#F9A11B;}
+       .st11{fill:#993940;}
+       .st12{fill:#7C56A4;}
+       .st13{fill:#002FA5;}
+       .st14{font-size:23px;}
+       .st15{enable-background:new    ;}
+       .st16{font-size:22.4692px;}
+       .st17{font-size:24px;}
+       .st18{font-size:25px;}
+       .st19{font-size:25.0008px;}
+       .st20{fill:none;stroke:#363636;stroke-width:9;stroke-miterlimit:10;}
+       .st21{fill:#B5739D;stroke:#363636;stroke-width:9;stroke-miterlimit:10;}
+       .st22{font-size:41px;}
+       .st23{font-family:'ArialMT';}
+       .st24{font-size:10px;}
+</style>
+<g>
+       <rect x="0.5" y="1510.5" pointer-events="all" class="st0" width="1460" 
height="100"/>
+       <rect x="1" y="880.3" pointer-events="all" class="st0" width="1460" 
height="598.3"/>
+       <rect x="0.5" y="0.5" pointer-events="all" class="st0" width="1460" 
height="845.4"/>
+       <rect x="30.5" y="630.5" pointer-events="all" class="st1" width="40" 
height="20"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 15.3501 651.6816)" class="st2 
st3 st4">MVP</text>
+       </g>
+       <path pointer-events="stroke" class="st5" d="M781.8,799.7l-555-609.2"/>
+       <rect x="70.5" y="240.5" pointer-events="all" class="st1" width="70" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 18.0127 136)" class="st2 st3 
st4">Beta</text>
+       </g>
+       <rect x="288" y="135.5" pointer-events="all" class="st1" width="230" 
height="90"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 230.7544 136)" class="st2 st3 
st4">Production Ready...</text>
+       </g>
+       <rect x="966.5" y="76.5" pointer-events="all" class="st1" width="230" 
height="90"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 862 127)" class="st2 st3 
st4">Stable</text>
+       </g>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1231.0005 1033.2699)" 
class="st2 st3 st4">Deprecated</text>
+       </g>
+       <path pointer-events="stroke" class="st5" d="M817,809.5l-0.5-683"/>
+       <path pointer-events="stroke" class="st5" d="M666,1402.8l506-370"/>
+       <rect x="651" y="972.8" pointer-events="all" class="st1" width="400" 
height="50"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 176.5005 1018.3096)" class="st2 
st3 st4">Approaching End-of-Life</text>
+       </g>
+       <rect x="70.5" y="1540.5" pointer-events="all" class="st1" width="305" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 72.4224 1575.1875)" class="st6 
st3 st7">APIs</text>
+       </g>
+       <rect x="168" y="1540.5" pointer-events="all" class="st1" width="120" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1292.4648 1575.1871)" 
class="st8 st3 st7">Languages</text>
+       </g>
+       <rect x="325.5" y="1540.5" pointer-events="all" class="st1" width="90" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 522.017 1575.187)" class="st9 
st3 st7">Clients</text>
+       </g>
+       <rect x="710.5" y="1540.5" pointer-events="all" class="st1" width="140" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 676.0762 1575.187)" class="st10 
st3 st7">Connectors</text>
+       </g>
+       <rect x="870.5" y="1540.5" pointer-events="all" class="st1" width="180" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 871.0003 1575.187)" class="st11 
st3 st7">State Backends</text>
+       </g>
+       <rect x="1085.5" y="1540.5" pointer-events="all" class="st1" 
width="100" height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1121.5 1575.4209)" class="st12 
st3 st7">Libraries</text>
+       </g>
+       <rect x="450.5" y="1540.5" pointer-events="all" class="st1" width="220" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 188.5 1575.1871)" class="st13 
st3 st7">Resource Managers</text>
+       </g>
+       <rect x="856.5" y="160.5" pointer-events="all" class="st1" width="305" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 858.5 187.5)" class="st6 st3 
st14">DataStream (streaming)</text>
+       </g>
+       <rect x="70.5" y="290.5" pointer-events="all" class="st1" width="210" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 15.0259 322)" class="st6 st3 
st14">DataStream (batch)</text>
+       </g>
+       <rect x="643.5" y="1072.8" pointer-events="all" class="st1" width="100" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 551.5 1083.3936)" class="st6 
st3 st14">DataSet</text>
+       </g>
+       <rect x="510.5" y="206.5" pointer-events="all" class="st1" width="270" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 580.2861 214.5781)" class="st6 
st3 st14">SQL &amp; Table API</text>
+       </g>
+       <rect x="851" y="1312.8" pointer-events="all" class="st1" width="270" 
height="70"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(0.8713 0.5013 -0.5066 0.8622 960.9474 
1300.7898)" class="st15"><tspan x="0" y="0" class="st6 st3 st16">Legacy 
SQL</tspan><tspan x="0" y="27" class="st6 st3 st16">Query Engine</tspan></text>
+       </g>
+       <rect x="787" y="1087.8" pointer-events="all" class="st1" width="180" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(0.9824 0.1869 -0.1869 0.9824 791.3521 
1093.2705)" class="st6 st3 st14">Queryable State</text>
+       </g>
+       <rect x="90.5" y="540.5" pointer-events="all" class="st1" width="210" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 106 544.8164)" class="st6 st3 
st14">State Processor API</text>
+       </g>
+       <rect x="541" y="1182.8" pointer-events="all" class="st1" width="130" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 371 1192.6338)" class="st9 st3 
st17">Scala Shell</text>
+       </g>
+       <rect x="1170.5" y="166.5" pointer-events="all" class="st1" width="110" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1171 237.5)" class="st8 st3 
st17">Java 8</text>
+       </g>
+       <rect x="1170.5" y="206.5" pointer-events="all" class="st1" width="120" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1172 281)" class="st8 st3 
st17">Java 11</text>
+       </g>
+       <rect x="1171.5" y="261.5" pointer-events="all" class="st1" width="140" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1171 333.1797)" class="st8 st3 
st17">Scala 2.12</text>
+       </g>
+       <rect x="476" y="1047.8" pointer-events="all" class="st1" width="140" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 306 1083.3945)" class="st8 st3 
st17">Scala 2.11</text>
+       </g>
+       <rect x="633" y="150.5" pointer-events="all" class="st1" width="135" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 341.6924 211)" class="st13 st3 
st18">Kubernetes</text>
+       </g>
+       <rect x="856.5" y="210.5" pointer-events="all" class="st1" width="130" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 858.5 237.5)" class="st13 st3 
st18">Standalone</text>
+       </g>
+       <rect x="856.5" y="256.5" pointer-events="all" class="st1" width="135" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 858.5 283.5)" class="st13 st3 
st18">Yarn</text>
+       </g>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1171 377.5)" class="st13 st3 
st18">Zookeeper HA</text>
+       </g>
+       <rect x="1062" y="1192.8" pointer-events="all" class="st1" width="135" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(0.92 0.392 -0.392 0.92 1093.6067 
1194.7485)" class="st13 st3 st19">Mesos</text>
+       </g>
+       <rect x="380.5" y="640.5" pointer-events="all" class="st1" width="220" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 371 659.9396)" 
class="st15"><tspan x="0" y="0" class="st13 st3 st18">Kubernetes-based 
HA</tspan><tspan x="0" y="30" class="st13 st3 
st18">(ZK-alternative)</tspan></text>
+       </g>
+       <rect x="856.5" y="301.5" pointer-events="all" class="st1" width="220" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 858.5 328.5)" class="st11 st3 
st17">Heap/FS State Back.</text>
+       </g>
+       <rect x="850.5" y="350.5" pointer-events="all" class="st1" width="305" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 858.75 381.1953)" class="st11 
st3 st17">RocksDB/FS State Back.</text>
+       </g>
+       <rect x="696" y="1222.8" pointer-events="all" class="st1" width="60" 
height="50"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(0.9409 0.3387 -0.3387 0.9409 712.0695 
1219.0122)" class="st12 st3 st17">Gelly</text>
+       </g>
+       <rect x="1044" y="240.5" pointer-events="all" class="st1" width="70" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1171 189.5)" class="st12 st3 
st17">CEP</text>
+       </g>
+       <rect x="50.5" y="680.5" pointer-events="all" class="st1" width="190" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 17.7056 708.66)" class="st12 
st3 st17">Machine Learning </text>
+               <text transform="matrix(1 0 0 1 17.7056 737.46)" class="st12 
st3 st17">Library</text>
+       </g>
+       <rect x="480.5" y="341.5" pointer-events="all" class="st1" width="135" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 430.1448 351)" class="st10 st3 
st17">JDBC Sink</text>
+       </g>
+       <rect x="190.5" y="470.5" pointer-events="all" class="st1" width="230" 
height="70"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 83.8286 500.9998)" class="st10 
st3 st17">Unified Source API. [w/ Kafka, File]</text>
+       </g>
+       <rect x="871.5" y="460.5" pointer-events="all" class="st1" 
width="187.5" height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 857 488.0273)" class="st10 st3 
st17">File Source &amp; Sink</text>
+       </g>
+       <rect x="1081.5" y="460.5" pointer-events="all" class="st1" width="305" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1156 488.5547)" class="st10 st3 
st17">Kafka Source &amp; Sink</text>
+       </g>
+       <rect x="585.5" y="275.5" pointer-events="all" class="st1" width="160" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 586 285.3643)" class="st10 st3 
st17">Pulsar Source &amp; Sink</text>
+       </g>
+       <rect x="580.5" y="500.5" pointer-events="all" class="st1" width="240" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 580.9999 531)" class="st10 st3 
st17">Rabbit MQ Source</text>
+       </g>
+       <rect x="525.5" y="400.5" pointer-events="all" class="st1" width="240" 
height="30"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 502.75 416)" class="st10 st3 
st17">Kinesis Source &amp; Sink</text>
+       </g>
+       <rect x="871.5" y="510.5" pointer-events="all" class="st1" width="190" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 856.9995 541)" class="st10 st3 
st17">PubSub Source</text>
+       </g>
+       <rect x="190.5" y="590.5" pointer-events="all" class="st1" width="190" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 201 584.6249)" class="st10 st3 
st17">NiFi Source</text>
+       </g>
+       <rect x="1081.5" y="510.5" pointer-events="all" class="st1" width="220" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1158.04 537.5)" class="st10 st3 
st17">Elastic Search Sink</text>
+       </g>
+       <rect x="1081.5" y="560.5" pointer-events="all" class="st1" width="220" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1160.0801 587.7158)" 
class="st10 st3 st17">Cassandra Sink</text>
+       </g>
+       <rect x="861.5" y="560.5" pointer-events="all" class="st1" width="220" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 857.5996 587.9316)" class="st10 
st3 st17">HBase Sink</text>
+       </g>
+       <rect x="633" y="330.5" pointer-events="all" class="st1" width="135" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 616.5 351.2158)" class="st10 
st3 st17">Hive Catalog</text>
+       </g>
+       <rect x="390.5" y="251.5" pointer-events="all" class="st1" width="160" 
height="60"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 381 281)" class="st15"><tspan 
x="0" y="0" class="st10 st3 st17">Hive SQL.</tspan><tspan x="0" y="28.8" 
class="st10 st3 st17">Source &amp; Sink</tspan></text>
+       </g>
+       <rect x="200.5" y="390.5" pointer-events="all" class="st1" width="190" 
height="70"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 43.0991 449.1187)" class="st10 
st3 st17">Unified Sink API [w/ FileSink]</text>
+       </g>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 16.9805 270.6396)" class="st8 
st3 st17">Python Table API</text>
+       </g>
+       <rect x="248" y="730.5" pointer-events="all" class="st1" width="250" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 281.5 791)" class="st8 st3 
st17">Python DataStream API</text>
+       </g>
+       <rect x="1201.5" y="610.5" pointer-events="all" class="st1" width="180" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 857 695)" class="st10 st3 
st17">S3 FileSystem</text>
+       </g>
+       <rect x="1075.5" y="674.5" pointer-events="all" class="st1" width="180" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1161.1123 684.5889)" 
class="st10 st3 st17">GCS FileSystem</text>
+       </g>
+       <rect x="856.5" y="610.5" pointer-events="all" class="st1" width="180" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 858.5 637.5)" class="st10 st3 
st17">Local/NFS FileSystem</text>
+       </g>
+       <rect x="1006.5" y="610.5" pointer-events="all" class="st1" width="180" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 1160.0801 629.8887)" 
class="st10 st3 st17">HDFS FileSystem</text>
+       </g>
+       <rect x="660.5" y="640.5" pointer-events="all" class="st1" width="130" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 676.0044 642.1797)" 
class="st15"><tspan x="0" y="0" class="st10 st3 st17">Azure Blob</tspan><tspan 
x="0" y="28.8" class="st10 st3 st17">FileSystem</tspan></text>
+       </g>
+       <rect x="633" y="555.5" pointer-events="all" class="st1" width="180" 
height="50"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 633.5 570.5801)" 
class="st15"><tspan x="0" y="0" class="st10 st3 st17">AliCloud OSS 
</tspan><tspan x="0" y="28.8" class="st10 st3 st17">FileSystem</tspan></text>
+       </g>
+       <path pointer-events="stroke" class="st20" 
d="M188,790.5c135-260,523.6-390,1165.7-390"/>
+       <path pointer-events="all" class="st21" 
d="M1360.4,400.5l-9,4.5l2.2-4.5l-2.2-4.5L1360.4,400.5z"/>
+       <path pointer-events="stroke" class="st20" 
d="M176,1122.8c543.3-13.3,911.4,95.7,1104.2,327.1"/>
+       <path pointer-events="all" class="st21" 
d="M1284.6,1455l-9.2-4l4.9-1.2l2-4.6L1284.6,1455z"/>
+       <rect x="445.5" y="580.5" pointer-events="all" class="st1" width="90" 
height="40"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 446 611)" class="st9 st3 
st17">SQL CLI</text>
+       </g>
+       <path pointer-events="stroke" class="st5" d="M711.6,816.7L20.5,580.5"/>
+       <rect x="10.5" y="0.5" pointer-events="all" class="st1" width="540" 
height="90"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 11.5 46)" class="st2 st3 
st22">New- and Stable Features</text>
+       </g>
+       <rect x="11" y="932.8" pointer-events="all" class="st1" width="540" 
height="90"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 17.7056 926.8354)" class="st2 
st3 st22">Features Phasing Out</text>
+       </g>
+       <rect x="26.5" y="340.5" pointer-events="all" class="st1" width="274" 
height="60"/>
+       <g transform="translate(-0.5 -0.5)">
+               <text transform="matrix(1 0 0 1 16.9805 371)" 
class="st15"><tspan x="0" y="0" class="st10 st3 st17">Change-Data-Capture API 
and </tspan><tspan x="0" y="28.8" class="st10 st3 
st17">connectors</tspan></text>
+       </g>
+</g>
+<a xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" 
transform="translate(0,-5)">
+       <text transform="matrix(1 0 0 1 649.6006 1611.5)" class="st23 
st24">Viewer does not support full SVG 1.1</text>
+</a>
+</svg>
diff --git a/content/img/flink_feature_radar_2.svg 
b/content/img/flink_feature_radar_2.svg
new file mode 100644
index 0000000..39c6d2b
--- /dev/null
+++ b/content/img/flink_feature_radar_2.svg
@@ -0,0 +1,3 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" 
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg xmlns="http://www.w3.org/2000/svg" 
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="1462px" 
height="1611px" viewBox="-0.5 -0.5 1462 1611" content="&lt;mxfile 
host=&quot;app.diagrams.net&quot; modified=&quot;2021-03-02T12:54:02.361Z&quot; 
agent=&quot;5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like 
Gecko) Chrome/88.0.4324.190 Safari/537.36&quot; 
etag=&quot;HUCSSSwWrlf5aUy_RP5Y&quot; version=&quot;14.4.3&quot; 
type=&quot;device&quot;&gt;&lt;diagram id=& [...]
\ No newline at end of file
diff --git a/content/roadmap.html b/content/roadmap.html
index edf47c4..12f2a59 100644
--- a/content/roadmap.html
+++ b/content/roadmap.html
@@ -219,194 +219,318 @@ under the License.
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a 
href="#analytics-applications-and-the-roles-of-datastream-dataset-and-table-api"
 
id="markdown-toc-analytics-applications-and-the-roles-of-datastream-dataset-and-table-api">Analytics,
 Applications, and the roles of DataStream, DataSet, and Table API</a></li>
-  <li><a href="#batch-and-streaming-unification" 
id="markdown-toc-batch-and-streaming-unification">Batch and Streaming 
Unification</a></li>
-  <li><a href="#fast-batch-bounded-streams" 
id="markdown-toc-fast-batch-bounded-streams">Fast Batch (Bounded 
Streams)</a></li>
-  <li><a href="#stream-processing-use-cases" 
id="markdown-toc-stream-processing-use-cases">Stream Processing Use 
Cases</a></li>
-  <li><a href="#deployment-scaling-and-security" 
id="markdown-toc-deployment-scaling-and-security">Deployment, Scaling and 
Security</a></li>
-  <li><a href="#resource-management-and-configuration" 
id="markdown-toc-resource-management-and-configuration">Resource Management and 
Configuration</a></li>
-  <li><a href="#ecosystem" id="markdown-toc-ecosystem">Ecosystem</a></li>
-  <li><a href="#non-jvm-languages-python" 
id="markdown-toc-non-jvm-languages-python">Non-JVM Languages (Python)</a></li>
-  <li><a href="#connectors-and-formats" 
id="markdown-toc-connectors-and-formats">Connectors and Formats</a></li>
-  <li><a href="#miscellaneous" 
id="markdown-toc-miscellaneous">Miscellaneous</a></li>
+  <li><a href="#feature-radar" id="markdown-toc-feature-radar">Feature 
Radar</a>    <ul>
+      <li><a href="#feature-stages" id="markdown-toc-feature-stages">Feature 
Stages</a></li>
+    </ul>
+  </li>
+  <li><a 
href="#unified-analytics-where-batch-and-streaming-come-together-sql-and-beyond"
 
id="markdown-toc-unified-analytics-where-batch-and-streaming-come-together-sql-and-beyond">Unified
 Analytics: Where Batch and Streaming come Together; SQL and beyond.</a>    <ul>
+      <li><a href="#a-unified-sql-platform" 
id="markdown-toc-a-unified-sql-platform">A unified SQL Platform</a></li>
+      <li><a href="#deep-batch--streaming-unification-for-the-datastream-api" 
id="markdown-toc-deep-batch--streaming-unification-for-the-datastream-api">Deep 
Batch / Streaming Unification for the DataStream API</a></li>
+      <li><a href="#subsuming-dataset-with-datastream-and-table-api" 
id="markdown-toc-subsuming-dataset-with-datastream-and-table-api">Subsuming 
DataSet with DataStream and Table API</a></li>
+    </ul>
+  </li>
+  <li><a href="#applications-vs-clusters-flink-as-a-library" 
id="markdown-toc-applications-vs-clusters-flink-as-a-library">Applications vs. 
Clusters; “Flink as a Library”</a></li>
+  <li><a href="#performance" id="markdown-toc-performance">Performance</a>    
<ul>
+      <li><a href="#faster-checkpoints-and-recovery" 
id="markdown-toc-faster-checkpoints-and-recovery">Faster Checkpoints and 
Recovery</a></li>
+      <li><a href="#large-scale-batch-applications" 
id="markdown-toc-large-scale-batch-applications">Large Scale Batch 
Applications</a></li>
+    </ul>
+  </li>
+  <li><a href="#python-apis" id="markdown-toc-python-apis">Python APIs</a></li>
+  <li><a href="#documentation" 
id="markdown-toc-documentation">Documentation</a></li>
+  <li><a href="#miscellaneous-operational-tools" 
id="markdown-toc-miscellaneous-operational-tools">Miscellaneous Operational 
Tools</a></li>
+  <li><a href="#stateful-functions" 
id="markdown-toc-stateful-functions">Stateful Functions</a></li>
 </ul>
 
 </div>
 
-<p><strong>Preamble:</strong> This is not an authoritative roadmap in the 
sense of a strict plan with a specific
-timeline. Rather, we — the community — share our vision for the future and 
give an overview of the bigger
-initiatives that are going on and are receiving attention. This roadmap shall 
give users and
-contributors an understanding where the project is going and what they can 
expect to come.</p>
+<p><strong>Preamble:</strong> This roadmap is meant to provide users and 
contributors with a high-level summary of ongoing efforts,
+grouped by the major threads to which the efforts belong. With so much 
happening in Flink, we
+hope that this helps with understanding the direction of the project.
+The roadmap contains efforts in early stages as well as nearly completed
+efforts, so that users may get a better impression of the overall status and 
direction of those developments.</p>
+
+<p>More details and various smaller changes can be found in the
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals">FLIPs</a>.</p>
 
 <p>The roadmap is continuously updated. New features and efforts should be 
added to the roadmap once
 there is consensus that they will happen and what they will roughly look like 
for the user.</p>
 
-<p><strong>Last Update:</strong> 2019-09-04</p>
+<p><strong>Last Update:</strong> 2021-03-01</p>
+
+<hr />
 
-<h1 
id="analytics-applications-and-the-roles-of-datastream-dataset-and-table-api">Analytics,
 Applications, and the roles of DataStream, DataSet, and Table API</h1>
+<h1 id="feature-radar">Feature Radar</h1>
 
-<p>Flink views stream processing as a <a 
href="/flink-architecture.html">unifying paradigm for data processing</a>
-(batch and real-time) and event-driven applications. The APIs are evolving to 
reflect that view:</p>
+<p>The feature radar is meant to give users guidance regarding feature 
maturity, as well as which features
+are approaching end-of-life. For questions, please contact the developer 
mailing list:
+<a 
href="&#109;&#097;&#105;&#108;&#116;&#111;:&#100;&#101;&#118;&#064;&#102;&#108;&#105;&#110;&#107;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;">&#100;&#101;&#118;&#064;&#102;&#108;&#105;&#110;&#107;&#046;&#097;&#112;&#097;&#099;&#104;&#101;&#046;&#111;&#114;&#103;</a></p>
+
+<div class="row front-graphic">
+  <img src="/img/flink_feature_radar_2.svg" width="700px" />
+</div>
+
+<h2 id="feature-stages">Feature Stages</h2>
 
 <ul>
-  <li>
-    <p>The <strong>Table API / SQL</strong> is becoming the primary API for 
analytical use cases, in a unified way
-across batch and streaming. To support analytical use cases in a more 
streamlined fashion,
-the API is being extended with more convenient multi-row/column operations (<a 
href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739";>FLIP-29</a>).</p>
+  <li><strong>MVP:</strong> Have a look, consider whether this can help you in 
the future.</li>
+  <li><strong>Beta:</strong> You can benefit from this, but you should 
carefully evaluate the feature.</li>
+  <li><strong>Ready and Evolving:</strong> Ready to use in production, but be 
aware you may need to make some adjustments to your application and setup in 
the future, when you upgrade Flink.</li>
+  <li><strong>Stable:</strong> Unrestricted use in production.</li>
+  <li><strong>Reaching End-of-Life:</strong> Stable, still feel free to use, 
but think about alternatives. Not a good match for new long-lived projects.</li>
+  <li><strong>Deprecated:</strong> Start looking for alternatives now.</li>
+</ul>
+
+<hr />
+
+<h1 
id="unified-analytics-where-batch-and-streaming-come-together-sql-and-beyond">Unified
 Analytics: Where Batch and Streaming come Together; SQL and beyond.</h1>
+
+<p>Flink is a streaming data system at its core, executing “batch as a 
special case of streaming”.
+Efficient execution of batch jobs is powerful in its own right; but even more 
so, batch processing
+capabilities (efficient processing of bounded streams) open the way for a 
seamless unification of
+batch and streaming applications.</p>
+
+<p>Unified streaming/batch up-levels the streaming data paradigm: It gives 
users consistent semantics across
+their real-time and lag-time applications. Furthermore, streaming applications 
often need to be complemented
+by batch (bounded stream) processing, for example when reprocessing data after 
bugs or data quality issues,
+or when bootstrapping new applications. A unified API and system make this 
much easier.</p>
+
+<h2 id="a-unified-sql-platform">A unified SQL Platform</h2>
+
+<p>The community has been building Flink into a powerful basis for a unified 
(batch and streaming) SQL analytics
+platform, and is continuing to do so.</p>
 
+<p>SQL has very strong cross-batch-streaming semantics, allowing users to use 
the same queries for ad-hoc analytics
+and as continuous queries. Flink already contains an efficient unified query 
engine, and a wide set of
+integrations. With user feedback, those are continuously improved.</p>
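A small sketch (plain Python, not Flink SQL) of why one query can serve both an ad-hoc and a continuous mode: a one-shot evaluation over a finished table and an incrementally maintained evaluation over the same rows agree in the end.

```python
from collections import Counter

rows = [("clicks", "user1"), ("clicks", "user2"), ("views", "user1")]

# Ad-hoc "query": SELECT event, COUNT(*) ... GROUP BY event, over the whole table.
batch_result = Counter(event for event, _ in rows)

# Continuous "query": the same aggregation, updated row by row as data arrives.
continuous_result = Counter()
for event, _ in rows:
    continuous_result[event] += 1  # each update would flow downstream as a changelog

assert batch_result == continuous_result
print(dict(continuous_result))  # {'clicks': 2, 'views': 1}
```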
+
+<p><strong>More Connector and Change Data Capture Support</strong></p>
+
+<ul>
+  <li>Change-Data-Capture: Capturing a stream of data changes, directly from 
databases, by attaching to the
+transaction log. The community is adding more CDC integrations.
     <ul>
-      <li>
-        <p>Like SQL, the Table API is <em>declarative</em>, operates on a 
<em>logical schema</em>, and applies <em>automatic optimization</em>.
-Because of these properties, that API does not give direct access to time and 
state.</p>
-      </li>
-      <li>
-        <p>The Table API is also the foundation for the Machine Learning (ML) 
efforts inititated in (<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-39+Flink+ML+pipeline+and+ML+libs";>FLIP-39</a>),
 that will allow users to easily build, persist and serve (<a 
href="https://issues.apache.org/jira/browse/FLINK-13167";>FLINK-13167</a>) ML 
pipelines/workflows through a set of abstract core interfaces.</p>
-      </li>
+      <li>External CDC connectors: <a 
href="https://flink-packages.org/packages/cdc-connectors";>https://flink-packages.org/packages/cdc-connectors</a></li>
+      <li>Background: <a 
href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=147427289";>FLIP-105</a>
+(CDC support for SQL) and <a href="https://debezium.io/";>Debezium</a>.</li>
     </ul>
   </li>
-  <li>
-    <p>The <strong>DataStream API</strong> is the primary API for data-driven 
applications and data pipelines.
-It uses <em>physical data types</em> (Java/Scala classes) and there is no 
automatic rewriting.
-The applications have explicit control over <em>time</em> and <em>state</em> 
(state, triggers, proc fun.). 
-In the long run, the DataStream API will fully subsume the DataSet API through 
<em>bounded streams</em>.</p>
+  <li>Data Lake Connectors: Unified streaming &amp; batch is a powerful value 
proposition for Data Lakes: supporting
+the same APIs, semantics, and engine for real-time stream processing and batch 
processing of historic data.
+The community is adding deeper integrations with various Data Lake systems:
+    <ul>
+      <li><a href="https://iceberg.apache.org/";>Apache Iceberg</a>: <a 
href="https://iceberg.apache.org/flink/";>https://iceberg.apache.org/flink/</a></li>
+      <li><a href="https://hudi.apache.org/";>Apache Hudi</a>: <a 
href="https://hudi.apache.org/blog/apache-hudi-meets-apache-flink/";>https://hudi.apache.org/blog/apache-hudi-meets-apache-flink/</a></li>
+      <li><a href="https://pinot.apache.org/";>Apache Pinot</a>: <a 
href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177045634";>FLIP-166</a></li>
+    </ul>
   </li>
 </ul>
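The CDC bullet above can be illustrated with a toy sketch (the tuple format is made up for illustration, not the Debezium wire format): replaying a database's change log reproduces the table's current state, which is what lets Flink treat a table as a stream.

```python
def materialize(changelog):
    """Apply insert/update/delete change events to rebuild the table state."""
    table = {}
    for op, key, row in changelog:
        if op in ("insert", "update"):
            table[key] = row
        elif op == "delete":
            table.pop(key, None)
    return table

changes = [
    ("insert", 1, {"name": "ada"}),
    ("insert", 2, {"name": "bob"}),
    ("update", 1, {"name": "ada lovelace"}),
    ("delete", 2, None),
]
print(materialize(changes))  # {1: {'name': 'ada lovelace'}}
```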
 
-<h1 id="batch-and-streaming-unification">Batch and Streaming Unification</h1>
+<p><strong>Platform Infrastructure</strong></p>
 
-<p>Flink’s approach is to cover batch and streaming by the same APIs on a 
streaming runtime.
-<a href="/news/2019/02/13/unified-batch-streaming-blink.html">This blog 
post</a>
-gives an introduction to the unification effort.</p>
+<ul>
+  <li>To simplify the building of production SQL platforms with Flink, we are 
improving the SQL client and are
+working on SQL gateway components that interface between client and cluster: 
<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-163%3A+SQL+Client+Improvements";>FLIP-163</a></li>
+</ul>
 
-<p>The biggest user-facing parts currently ongoing are:</p>
+<p><strong>Support for Common Languages, Formats, Catalogs</strong></p>
+
+<ul>
+  <li>Hive Query Compatibility: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-152%3A+Hive+Query+Syntax+Compatibility";>FLIP-152</a></li>
+</ul>
+
+<p>Flink has a broad SQL coverage for batch (full TPC-DS support) and a 
state-of-the-art set of supported
+operations in streaming. There is continuous effort to add more functions and 
cover more SQL operations.</p>
+
+<h2 id="deep-batch--streaming-unification-for-the-datastream-api">Deep Batch / 
Streaming Unification for the DataStream API</h2>
+
+<p>The <em>DataStream API</em> is Flink’s <em>physical</em> API, for use cases 
where users need very explicit control over data
+types, streams, state, and time. This API is evolving to support efficient 
batch execution on bounded data.</p>
+
+<p>The DataStream API executes the same dataflow shape in batch as in streaming, 
keeping the same operators.
+That way users keep the same level of control over the dataflow, and our goal 
is to let applications mix and switch between
+batch/streaming execution seamlessly in the future.</p>
+
+<p><strong>Unified Sources and Sinks</strong></p>
 
 <ul>
   <li>
-    <p>Table API restructuring (<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions";>FLIP-32</a>)
-that decouples the Table API from batch/streaming specific environments and 
dependencies. Some key parts of the FLIP are completed, such as the modular 
decoupling of expression parsing and the removal of Scala dependencies, and the 
next step is to unify the function stack (<a 
href="https://issues.apache.org/jira/browse/FLINK-12710";>FLINK-12710</a>).</p>
-  </li>
-  <li>
-    <p>The new source interfaces generalize across batch and streaming, making 
every connector usable as a batch and streaming data source (<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface";>FLIP-27</a>).</p>
+    <p>The first APIs and implementations of sources were specific to either 
streaming programs in the DataStream API
+(<a 
href="https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SourceFunction.java";>SourceFunction</a>),
+or to batch programs in the DataSet API (<a 
href="https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/io/InputFormat.java";>InputFormat</a>).</p>
+
+    <p>In this effort, we are creating sources that work across batch and 
streaming execution. The aim is to give
+users a consistent experience across both modes, and to allow them to easily 
switch between streaming and batch
+execution for their unbounded and bounded streaming applications.
+The interface for this New Source API is done and available, and we are 
working on migrating more source connectors
+to this new model, see <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface";>FLIP-27</a>.</p>
   </li>
   <li>
-    <p>The introduction of <em>upsert-</em> or <em>changelog-</em> sources 
will support more powerful streaming inputs to the Table API (<a 
href="https://issues.apache.org/jira/browse/FLINK-8545";>FLINK-8545</a>).</p>
+    <p>Similar to the sources, the original sink APIs are also specific 
to streaming
+(<a 
href="https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/SinkFunction.java";>SinkFunction</a>)
+and batch (<a 
href="https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/io/OutputFormat.java";>OutputFormat</a>)
+execution.</p>
+
+    <p>We have introduced a new API for sinks that consistently handles result 
writing and committing (<em>Transactions</em>)
+across batch and streaming. The first iteration of the API exists, and we are 
porting sinks and refining the
+API in the process. See <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-143%3A+Unified+Sink+API";>FLIP-143</a>.</p>
   </li>
 </ul>
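The split-based design behind the unified sources can be sketched conceptually (loosely inspired by FLIP-27; the names and structure are illustrative, not Flink's actual API): one reader loop serves both modes, and only the boundedness of the split set differs.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Split:
    """A unit of work, e.g. a file chunk or a partition slice."""
    records: List[int]

BOUNDED, UNBOUNDED = "bounded", "unbounded"

def read_source(splits: Iterable[Split], boundedness: str):
    """One reader loop for both modes; a bounded run simply terminates."""
    for split in splits:
        for record in split.records:
            yield record
    # An unbounded source would instead keep polling for new splits here.
    assert boundedness == BOUNDED, "unbounded sources never exhaust their splits"

splits = [Split([1, 2]), Split([3])]
print(list(read_source(splits, BOUNDED)))  # [1, 2, 3]
```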
 
-<p>On the runtime level, the streaming operators were extended in Flink 1.9 to 
also support the data consumption patterns required for some batch operations — 
which is groundwork for upcoming features like efficient <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-17+Side+Inputs+for+DataStream+API";>side
 inputs</a>.</p>
-
-<p>Once these unification efforts are completed, we can move on to unifying 
the DataStream API.</p>
-
-<h1 id="fast-batch-bounded-streams">Fast Batch (Bounded Streams)</h1>
-
-<p>The community’s goal is to make Flink’s performance on bounded streams 
(batch use cases) competitive with that
-of dedicated batch processors. While Flink has been shown to handle some batch 
processing use cases faster than
-widely-used batch processors, there are some ongoing efforts to make sure this 
the case for broader use cases:</p>
+<p><strong>DataStream Batch Execution</strong></p>
 
 <ul>
   <li>
-    <p>Faster and more complete SQL/Table API: The community is merging the 
Blink query processor which improves on
-the current query processor by adding a much richer set of runtime operators, 
optimizer rules, and code generation.
-The Blink-based query processor has full TPC-H support (with TPC-DS planned 
for the next release) and up to 10x performance improvement over the pre-1.9 
Flink query processor (<a 
href="https://issues.apache.org/jira/browse/FLINK-11439";>FLINK-11439</a>).</p>
+    <p>Flink is adding a <em>batch execution mode</em> for bounded DataStream 
programs. This gives users faster and simpler
+execution and recovery of their bounded streaming applications; users do not 
need to worry about watermarks and
+state sizes in this execution mode: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-140%3A+Introduce+batch-style+execution+for+bounded+keyed+streams";>FLIP-140</a></p>
+
+    <p>The core batch execution mode is implemented with <a 
href="https://flink.apache.org/news/2020/12/10/release-1.12.0.html#batch-execution-mode-in-the-datastream-api";>great
 results</a>;
+there are ongoing improvements around aspects like broadcast state and 
processing-time-timers.
+This mode requires the new unified sources and sinks that are mentioned above, 
so it is limited
+to the connectors that have been ported to those new APIs.</p>
   </li>
+</ul>
+
+<p><strong>Mixing bounded/unbounded streams, and batch/streaming 
execution</strong></p>
+
+<ul>
   <li>
-    <p>An application on bounded data can schedule operations after another, 
depending on how the operators
-consume data (e.g., first build hash table, then probe hash table).
-We are separating the scheduling strategy from the ExecutionGraph to support 
different strategies
-on bounded data (<a 
href="https://issues.apache.org/jira/browse/FLINK-10429";>FLINK-10429</a>).</p>
+    <p>Support checkpointing after some tasks have finished, and let bounded 
stream programs shut down with a final
+checkpoint: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-147%3A+Support+Checkpoints+After+Tasks+Finished";>FLIP-147</a></p>
   </li>
   <li>
-    <p>Caching of intermediate results on bounded data, to support use cases 
like interactive data exploration.
-The caching generally helps with applications where the client submits a 
series of jobs that build on
-top of one another and reuse each others’ results (<a 
href="https://issues.apache.org/jira/browse/FLINK-11199";>FLINK-11199</a>).</p>
+    <p>There are initial discussions and designs about jobs with mixed 
batch/streaming execution, so stay tuned for more
+news in that area.</p>
   </li>
 </ul>
 
-<p>Various of these enhancements can be integrated from the contributed code 
in the <a href="https://github.com/apache/flink/tree/blink";>Blink fork</a>. To 
exploit these optimizations for bounded streams also in the DataStream API, we 
first need to break parts of the API and explicitly model bounded streams.</p>
+<h2 id="subsuming-dataset-with-datastream-and-table-api">Subsuming DataSet 
with DataStream and Table API</h2>
 
-<h1 id="stream-processing-use-cases">Stream Processing Use Cases</h1>
+<p>We want to eventually drop the legacy batch-only DataSet API and have 
batch and stream processing unified
+throughout the entire system.</p>
 
-<p>The <em>new source interface</em> effort (<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface";>FLIP-27</a>)
-aims to give simpler out-of-the box support for event time and watermark 
generation for sources.
-Sources will have the option to align their consumption speed in event time, 
to reduce the
-size of in-flight state when re-processing large data volumes in streaming
-(<a 
href="https://issues.apache.org/jira/browse/FLINK-10886";>FLINK-10887</a>).</p>
+<p>Overall Discussion: <a 
href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866741";>FLIP-131</a></p>
 
-<p>To overcome the current pitfalls of checkpoint performance under 
backpressure scenarios, the community is introducing the concept of <a 
href="https://lists.apache.org/thread.html/fd5b6cceb4bffb635e26e7ec0787a8db454ddd64aadb40a0d08a90a8@%3Cdev.flink.apache.org%3E";>unaligned
 checkpoints</a>. This will allow checkpoint barriers to overtake the 
output/input buffer queue to speed up alignment and snapshot the inflight data 
as part of checkpoint state.</p>
+<p>The <em>DataStream API</em> supports batch execution to efficiently execute 
streaming programs on historic data
+(see above), and takes over that set of use cases.</p>
 
-<p>We also plan to add first class support for
-<a href="https://developers.google.com/protocol-buffers/";>Protocol Buffers</a> 
to make evolution of streaming state simpler, similar to the way
-Flink deeply supports Avro state evolution (<a 
href="https://issues.apache.org/jira/browse/FLINK-11333";>FLINK-11333</a>).</p>
+<p>The <em>Table API</em> should become the default API for batch-only 
applications.</p>
 
-<h1 id="deployment-scaling-and-security">Deployment, Scaling and Security</h1>
+<ul>
+  <li>Add more operations to the Table API, to support common data manipulation 
tasks more
+   easily: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-155%3A+Introduce+a+few+convenient+operations+in+Table+API";>FLIP-155</a></li>
+  <li>Make Source and Sink definitions easier in the Table API.</li>
+</ul>
 
-<p>To provide downstream projects with a consistent way to programatically 
control Flink deployment submissions, the Client API is being <a 
href="https://lists.apache.org/thread.html/ce99cba4a10b9dc40eb729d39910f315ae41d80ec74f09a356c73938@%3Cdev.flink.apache.org%3E";>refactored</a>.
 The goal is to unify the implementation of cluster deployment and job 
submission in Flink and allow more flexible job and cluster management — 
independent of cluster setup or deployment mode. <a href="https:/ [...]
+<p>Improve the <em>interplay between the Table API and the DataStream API</em> 
to allow switching from Table API to
+DataStream API when more control over the data types and operations is 
necessary.</p>
+
+<ul>
+  <li>Interoperability between DataStream and Table APIs: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-136%3A++Improve+interoperability+between+DataStream+and+Table+API";>FLIP-136</a></li>
+</ul>
+
+<hr />
 
-<p>The community is working on extending the interoperability with 
authentication and authorization services.
-Under discussion are general extensions to the <a 
href="http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-security-improvements-td21068.html";>security
 module abstraction</a>
-as well as specific <a 
href="http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-Kerberos-Improvement-td25983.html";>enhancements
 to the Kerberos support</a>.</p>
+<h1 id="applications-vs-clusters-flink-as-a-library">Applications vs. 
Clusters; “Flink as a Library”</h1>
 
-<h1 id="resource-management-and-configuration">Resource Management and 
Configuration</h1>
+<p>The goal of these efforts is to make it feel natural to deploy (long 
running streaming) Flink applications.
+Instead of starting a cluster and submitting a job to that cluster, these 
efforts support deploying a streaming
+job as a self-contained application.</p>
 
-<p>There is a big effort to design a new way for Flink to interact with 
dynamic resource
-pools and automatically adjust to resource availability and load.
-Part of this is becoming a <em>reactive</em> way of adjusting to changing 
resources (like
-containers/pods being started or removed) (<a 
href="https://issues.apache.org/jira/browse/FLINK-10407";>FLINK-10407</a>),
-while other parts are resulting in <em>active</em> scaling policies where 
Flink decides to add
-or remove TaskManagers, based on internal metrics.</p>
+<p>For example, as a simple Kubernetes deployment, deployed and scaled like a 
regular application without extra workflows.</p>
+
+<p>Deploying Flink jobs as self-contained applications works for all deployment 
targets since Flink 1.11.0
+(<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-85+Flink+Application+Mode";>FLIP-85</a>).</p>
 
 <ul>
   <li>
-    <p>The current TaskExecutor memory configuration in Flink has some 
shortcomings that make it hard to reason about or optimize resource 
utilization, such as: (1) different configuration models for memory footprint 
for Streaming and Batch; (2) complex and user-dependent configuration of 
off-heap state backends (typically RocksDB) in Streaming execution; (3) and 
sub-optimal memory utilization in Batch execution. <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unifi [...]
-  </li>
-  <li>
-    <p>In a similar way, we are introducing changes to Flink’s resource 
management module with <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-53%3A+Fine+Grained+Operator+Resource+Management";>FLIP-53</a>
 to enable fine-grained control over Operator resource utilization according to 
known (or unknown) resource profiles. Since the requirements of this FLIP 
conflict with the existing static slot allocation model, this model first needs 
to be refactored to provide dynamic slo [...]
+    <p>Reactive Scaling lets Flink applications change their parallelism in 
response to growing and shrinking
+worker pools, and makes Flink compatible with standard auto-scalers:
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-159%3A+Reactive+Mode";>FLIP-159</a></p>
   </li>
   <li>
-    <p>To support the active resource management also in Kubernetes, we are 
working on a Kubernetes Resource Manager
-(<a 
href="https://issues.apache.org/jira/browse/FLINK-9953";>FLINK-9953</a>).</p>
+    <p>Kubernetes-based HA-services let Flink applications run on Kubernetes 
without requiring a ZooKeeper dependency:
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-144%3A+Native+Kubernetes+HA+for+Flink";>FLIP-144</a></p>
   </li>
 </ul>
 
-<p>Spillable Heap State Backend (<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-50%3A+Spill-able+Heap+Keyed+State+Backend";>FLIP-50</a>),
 a new state backend configuration, is being implemented to support spilling 
cold state data to disk before heap memory is exhausted and so reduce the 
chance of OOM errors in job execution. This is not meant as a replacement for 
RocksDB, but more of an enhancement to the existing Heap State Backend.</p>
+<hr />
+
+<h1 id="performance">Performance</h1>
 
-<h1 id="ecosystem">Ecosystem</h1>
+<p>Continuous work to keep improving performance and recovery speed.</p>
 
-<p>The community is working on extending the support for catalogs, schema 
registries, and metadata stores, including support in the APIs and the SQL 
client (<a 
href="https://issues.apache.org/jira/browse/FLINK-11275";>FLINK-11275</a>).
-We have added DDL (Data Definition Language) support in Flink 1.9 to make it 
easy to add tables to catalogs (<a 
href="https://issues.apache.org/jira/browse/FLINK-10232";>FLINK-10232</a>), and 
will extend the support to streaming use cases in the next release.</p>
+<h2 id="faster-checkpoints-and-recovery">Faster Checkpoints and Recovery</h2>
 
-<p>There is also an ongoing effort to fully integrate Flink with the Hive 
ecosystem. The latest release made headway in bringing Hive data and metadata 
interoperability to Flink, along with initial support for Hive UDFs. Moving 
forward, the community will stabilize and expand on the existing implementation 
to support Hive DDL syntax and types, as well as other desirable features and 
capabilities described in <a 
href="https://issues.apache.org/jira/browse/FLINK-10556";>FLINK-10556</a>.</p>
+<p>The community is continuously working on improving checkpointing and 
recovery speed.
+Checkpoints and recovery are stable and have been a reliable workhorse for 
years. We are still
+working to make them faster and more predictable, and to remove confusion and 
inflexibility in some areas.</p>
 
-<h1 id="non-jvm-languages-python">Non-JVM Languages (Python)</h1>
+<ul>
+  <li>Unaligned Checkpoints, to make checkpoints progress faster when 
applications cause backpressure:
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-76%3A+Unaligned+Checkpoints";>FLIP-76</a>,
 available
+since Flink 1.12.2.</li>
+  <li>Log-based checkpoints, for very frequent incremental checkpointing:
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-158%3A+Generalized+incremental+checkpoints";>FLIP-158</a></li>
+</ul>
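The log-based checkpointing idea can be sketched with a toy model (illustrative only, not FLIP-158's actual design): instead of snapshotting the full state each time, small diffs are appended to a changelog, and recovery replays the log onto the last full snapshot.

```python
snapshot = {}   # last full checkpoint of the state
changelog = []  # state changes recorded since that snapshot

def update_state(key, value):
    """Each state change is also appended to the cheap, frequent changelog."""
    changelog.append((key, value))

def recover():
    """Recovery = last full snapshot + replay of the changelog on top."""
    state = dict(snapshot)
    for key, value in changelog:
        state[key] = value
    return state

update_state("count", 1)
update_state("count", 2)
update_state("max", 9)
print(recover())  # {'count': 2, 'max': 9}
```

Because appending a diff is much cheaper than writing the whole state, checkpoints can be taken far more frequently, at the cost of a longer replay on recovery.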
+
+<h2 id="large-scale-batch-applications">Large Scale Batch Applications</h2>
 
-<p>The work initiated in Flink 1.9 to bring full Python support to the Table 
API (<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API";>FLIP-38</a>)
 will continue in the upcoming releases, also in close collaboration with the 
Apache Beam community. The next steps include:</p>
+<p>The community is working on making large scale batch execution (parallelism 
on the order of 10,000s)
+simpler (less configuration tuning required) and more performant.</p>
 
 <ul>
   <li>
-    <p>Adding support for Python UDFs (Scalar Functions (UDF), Tabular 
Functions (UDTF) and Aggregate Functions (UDAF)). The details of this 
implementation are defined in <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-58%3A+Flink+Python+User-Defined+Function+for+Table+API";>FLIP-58</a>
 and leverage the <a 
href="https://docs.google.com/document/d/1B9NmaBSKCnMJQp-ibkxvZ_U233Su67c1eYgBhrqWP24/edit#heading=h.khjybycus70";>Apache
 Beam portability framework</a> as a basis for UD [...]
+    <p>Introduce a more scalable batch shuffle. First parts of this have been 
merged, and ongoing efforts are
+to make the memory footprint (JVM direct memory) more predictable, see
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-148%3A+Introduce+Sort-Merge+Based+Blocking+Shuffle+to+Flink";>FLIP-148</a></p>
+
+    <ul>
+      <li><a 
href="https://issues.apache.org/jira/browse/FLINK-20740";>FLINK-20740</a></li>
+      <li><a 
href="https://issues.apache.org/jira/browse/FLINK-19938";>FLINK-19938</a></li>
+    </ul>
   </li>
   <li>
-    <p>Integrating Pandas as the final effort — that is, making functions in 
Pandas directly usable in the Python Table API.</p>
+    <p>Make scheduler faster for higher parallelism: <a 
href="https://issues.apache.org/jira/browse/FLINK-21110";>FLINK-21110</a></p>
   </li>
 </ul>
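A toy sketch of the sort-merge shuffle idea (illustrative of FLIP-148's general approach, not its implementation): each producer writes one sorted run, and the consuming side reads all runs with a single streaming merge instead of many small random reads.

```python
import heapq

# Each "map task" sorts its (key, value) output into one run on "disk".
runs = [sorted(task_output) for task_output in [
    [(2, "b"), (1, "a")],
    [(1, "c"), (3, "d")],
]]

# The consuming side merges the sorted runs with one sequential pass.
merged = list(heapq.merge(*runs))
print(merged)  # [(1, 'a'), (1, 'c'), (2, 'b'), (3, 'd')]
```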
 
-<h1 id="connectors-and-formats">Connectors and Formats</h1>
+<hr />
+
+<h1 id="python-apis">Python APIs</h1>
+
+<p>Stateful transformation functions for the Python DataStream API:
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-153%3A+Support+state+access+in+Python+DataStream+API";>FLIP-153</a></p>
+
+<hr />
 
-<p>Support for additional connectors and formats is a continuous process.</p>
+<h1 id="documentation">Documentation</h1>
 
-<h1 id="miscellaneous">Miscellaneous</h1>
+<p>There are various dedicated efforts to simplify the maintenance and 
structure (more intuitive navigation/reading)
+of the documentation.</p>
 
 <ul>
-  <li>
-    <p>The Flink code base has been updated to support Java 9 (<a 
href="https://issues.apache.org/jira/browse/FLINK-8033";>FLINK-8033</a>) and 
Java 11 support is underway (<a 
href="https://issues.apache.org/jira/browse/FLINK-10725";>FLINK-10725</a>).</p>
-  </li>
-  <li>
-    <p>To reduce compatibility issues with different Scala versions, we are 
working using Scala
-only in the Scala APIs, but not in the runtime. That removes any Scala 
dependency for all
-Java-only users, and makes it easier for Flink to support different Scala 
versions (<a 
href="https://issues.apache.org/jira/browse/FLINK-11063";>FLINK-11063</a>).</p>
-  </li>
+  <li>Docs Tech Stack: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-157+Migrate+Flink+Documentation+from+Jekyll+to+Hugo";>FLIP-157</a></li>
+  <li>General Docs Structure: <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-42%3A+Rework+Flink+Documentation";>FLIP-42</a></li>
+  <li>SQL Docs: <a 
href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=127405685";>FLIP-60</a></li>
 </ul>
 
+<hr />
+
+<h1 id="miscellaneous-operational-tools">Miscellaneous Operational Tools</h1>
+
+<ul>
+  <li>Allow switching state backends with savepoints: <a 
href="https://issues.apache.org/jira/browse/FLINK-20976";>FLINK-20976</a></li>
+  <li>Support for Savepoints with more properties, like incremental 
savepoints, etc.:
+<a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-47%3A+Checkpoints+vs.+Savepoints";>FLIP-47</a></li>
+</ul>
+
+<hr />
+
+<h1 id="stateful-functions">Stateful Functions</h1>
+
+<p>The Stateful Functions subproject has its own roadmap published under <a 
href="https://statefun.io/";>statefun.io</a>.</p>
+
 
   </div>
 </div>
