[flink] 04/04: [FLINK-18428][API/DataStream] Rename StreamExecutionEnvironment#continuousSource() to StreamExecutionEnvironment#source().

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 369d4a6ce46dddfd2a14ad66ac2e189bd4827158
Author: Jiangjie (Becket) Qin 
AuthorDate: Wed Jun 24 21:22:18 2020 +0800

[FLINK-18428][API/DataStream] Rename 
StreamExecutionEnvironment#continuousSource() to 
StreamExecutionEnvironment#source().

This closes #12766
---
 docs/dev/stream/sources.md  | 6 +++---
 docs/dev/stream/sources.zh.md   | 6 +++---
 .../flink/connector/base/source/reader/CoordinatedSourceITCase.java | 6 +++---
 .../tests/test_stream_execution_environment_completeness.py | 2 +-
 .../flink/streaming/api/environment/StreamExecutionEnvironment.java | 6 +++---
 .../flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java   | 4 ++--
 .../flink/streaming/api/scala/StreamExecutionEnvironment.scala  | 4 ++--
 .../flink/streaming/api/scala/StreamExecutionEnvironmentTest.scala  | 2 +-
 8 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/docs/dev/stream/sources.md b/docs/dev/stream/sources.md
index 669ca8f..3c3db90 100644
--- a/docs/dev/stream/sources.md
+++ b/docs/dev/stream/sources.md
@@ -187,7 +187,7 @@ final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEn
 
 Source mySource = new MySource(...);
 
-DataStream stream = env.continuousSource(
+DataStream stream = env.fromSource(
 mySource,
 WatermarkStrategy.noWatermarks(),
 "MySourceName");
@@ -200,7 +200,7 @@ val env = StreamExecutionEnvironment.getExecutionEnvironment()
 
 val mySource = new MySource(...)
 
-val stream = env.continuousSource(
+val stream = env.fromSource(
   mySource,
   WatermarkStrategy.noWatermarks(),
   "MySourceName")
@@ -352,7 +352,7 @@ Apparently, the `SourceReader` implementations can also implement their own thre
The `WatermarkStrategy` is passed to the Source during creation in the DataStream API and creates both the [TimestampAssigner](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/TimestampAssigner.java) and [WatermarkGenerator](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkGenerator.java).
 
 {% highlight java %}
-environment.continuousSource(
+environment.fromSource(
 Source source,
 WatermarkStrategy timestampsAndWatermarks,
 String sourceName)
diff --git a/docs/dev/stream/sources.zh.md b/docs/dev/stream/sources.zh.md
index 3f20388..a063ecb 100644
--- a/docs/dev/stream/sources.zh.md
+++ b/docs/dev/stream/sources.zh.md
@@ -187,7 +187,7 @@ final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEn
 
 Source mySource = new MySource(...);
 
-DataStream stream = env.continuousSource(
+DataStream stream = env.fromSource(
 mySource,
 WatermarkStrategy.noWatermarks(),
 "MySourceName");
@@ -200,7 +200,7 @@ val env = StreamExecutionEnvironment.getExecutionEnvironment()
 
 val mySource = new MySource(...)
 
-val stream = env.continuousSource(
+val stream = env.fromSource(
   mySource,
   WatermarkStrategy.noWatermarks(),
   "MySourceName")
@@ -352,7 +352,7 @@ Apparently, the `SourceReader` implementations can also implement their own thre
The `WatermarkStrategy` is passed to the Source during creation in the DataStream API and creates both the [TimestampAssigner](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/TimestampAssigner.java) and [WatermarkGenerator](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkGenerator.java).
 
 {% highlight java %}
-environment.continuousSource(
+environment.fromSource(
 Source source,
 WatermarkStrategy timestampsAndWatermarks,
 String sourceName)
diff --git 
a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
 
b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
index 6582210..3280c38 100644
--- 
a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
+++ 
b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
@@ -45,7 +45,7 @@ public class CoordinatedSourceITCase extends AbstractTestBase {
public void testEnumeratorReaderCommunication() throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
MockBaseSource source = new MockBaseSource(2, 10, Boundedness.BOUNDED);
-   DataStream stream = 

[flink] 01/04: [FLINK-18429][DataStream API] Make CheckpointListener.notifyCheckpointAborted(checkpointId) a default method.

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a60a4a9c71a2648176ca46a25230920674130d01
Author: Stephan Ewen 
AuthorDate: Wed Jun 24 17:26:55 2020 +0200

[FLINK-18429][DataStream API] Make 
CheckpointListener.notifyCheckpointAborted(checkpointId) a default method.

This avoids breaking many user programs that use this interface.
---
 .../main/java/org/apache/flink/runtime/state/CheckpointListener.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
index 13c8e39..ddf2f2d 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
@@ -45,5 +45,5 @@ public interface CheckpointListener {
 * @param checkpointId The ID of the checkpoint that has been aborted.
 * @throws Exception
 */
-   void notifyCheckpointAborted(long checkpointId) throws Exception;
+   default void notifyCheckpointAborted(long checkpointId) throws Exception {};
 }
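What the default method buys in practice: existing `CheckpointListener` implementations that predate `notifyCheckpointAborted` keep compiling unchanged. A self-contained sketch of the interface shape (a local copy for illustration, not Flink's actual class):

```java
import java.util.ArrayList;
import java.util.List;

public class DefaultMethodSketch {

    // Mirrors the post-FLINK-18429 shape: the aborted callback is an empty default.
    interface CheckpointListener {
        void notifyCheckpointComplete(long checkpointId) throws Exception;

        default void notifyCheckpointAborted(long checkpointId) throws Exception {}
    }

    // A pre-1.11-style user class that only ever knew about completion callbacks.
    static class LegacySink implements CheckpointListener {
        final List<Long> completed = new ArrayList<>();

        @Override
        public void notifyCheckpointComplete(long checkpointId) {
            completed.add(checkpointId);
        }
        // No notifyCheckpointAborted override needed -- it still compiles.
    }

    public static void main(String[] args) throws Exception {
        LegacySink sink = new LegacySink();
        sink.notifyCheckpointComplete(7L);
        sink.notifyCheckpointAborted(8L); // falls back to the empty default
        System.out.println(sink.completed); // [7]
    }
}
```

Had the method stayed abstract, a class like `LegacySink` would have needed an empty override before compiling against 1.11.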



[flink] 02/04: [hotfix][DataStream API] Fix checkstyle issues and JavaDocs in CheckpointListener.

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 7abf7ba1a52eefd2ec2213f2ffb9f0e24bdf4b82
Author: Stephan Ewen 
AuthorDate: Wed Jun 24 17:51:22 2020 +0200

[hotfix][DataStream API] Fix checkstyle issues and JavaDocs in 
CheckpointListener.
---
 .../apache/flink/runtime/state/CheckpointListener.java   | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
index ddf2f2d..adc4baf 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
@@ -15,8 +15,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.flink.runtime.state;
 
+package org.apache.flink.runtime.state;
 
 import org.apache.flink.annotation.PublicEvolving;
 
@@ -30,12 +30,13 @@ public interface CheckpointListener {
 
/**
* This method is called as a notification once a distributed checkpoint has been completed.
-* 
-* Note that any exception during this method will not cause the checkpoint to
+*
+* Note that any exception during this method will not cause the checkpoint to
 * fail any more.
-* 
+*
 * @param checkpointId The ID of the checkpoint that has been completed.
-* @throws Exception
+* @throws Exception This method can propagate exceptions, which leads to a failure/recovery for
+*   the task. Note that this will NOT lead to the checkpoint being revoked.
 */
void notifyCheckpointComplete(long checkpointId) throws Exception;
 
@@ -43,7 +44,8 @@ public interface CheckpointListener {
* This method is called as a notification once a distributed checkpoint has been aborted.
 *
 * @param checkpointId The ID of the checkpoint that has been aborted.
-* @throws Exception
+* @throws Exception This method can propagate exceptions, which leads to a failure/recovery for
+*   the task.
 */
-   default void notifyCheckpointAborted(long checkpointId) throws Exception {};
+   default void notifyCheckpointAborted(long checkpointId) throws Exception {}
 }



[flink] branch release-1.11 updated (62c7265 -> 369d4a6)

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a change to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from 62c7265  [FLINK-18426] Remove incompatible deprecated keys from ClusterOptions
 new a60a4a9  [FLINK-18429][DataStream API] Make CheckpointListener.notifyCheckpointAborted(checkpointId) a default method.
 new 7abf7ba  [hotfix][DataStream API] Fix checkstyle issues and JavaDocs in CheckpointListener.
 new 25cad5f  [FLINK-18430][DataStream API] Classify CheckpointedFunction and CheckpointListener as @Public
 new 369d4a6  [FLINK-18428][API/DataStream] Rename StreamExecutionEnvironment#continuousSource() to StreamExecutionEnvironment#source().

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/dev/stream/sources.md   |  6 +++---
 docs/dev/stream/sources.zh.md|  6 +++---
 .../base/source/reader/CoordinatedSourceITCase.java  |  6 +++---
 ...test_stream_execution_environment_completeness.py |  2 +-
 .../flink/runtime/state/CheckpointListener.java  | 20 +++-
 .../api/checkpoint/CheckpointedFunction.java |  4 ++--
 .../api/environment/StreamExecutionEnvironment.java  |  6 +++---
 .../api/graph/StreamingJobGraphGeneratorTest.java|  4 ++--
 .../api/scala/StreamExecutionEnvironment.scala   |  4 ++--
 .../api/scala/StreamExecutionEnvironmentTest.scala   |  2 +-
 10 files changed, 31 insertions(+), 29 deletions(-)



[flink] 03/04: [FLINK-18430][DataStream API] Classify CheckpointedFunction and CheckpointListener as @Public

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 25cad5fb517bc8a6a6e2a37c78e43c474a99bbd9
Author: Stephan Ewen 
AuthorDate: Wed Jun 24 18:07:22 2020 +0200

[FLINK-18430][DataStream API] Classify CheckpointedFunction and 
CheckpointListener as @Public
---
 .../main/java/org/apache/flink/runtime/state/CheckpointListener.java  | 4 ++--
 .../apache/flink/streaming/api/checkpoint/CheckpointedFunction.java   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
index adc4baf..a32b597 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
@@ -18,14 +18,14 @@
 
 package org.apache.flink.runtime.state;
 
-import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.Public;
 
 /**
 * This interface must be implemented by functions/operations that want to receive
 * a commit notification once a checkpoint has been completely acknowledged by all
 * participants.
  */
-@PublicEvolving
+@Public
 public interface CheckpointListener {
 
/**
diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
index 604e7f4..5aeeb34 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
@@ -18,7 +18,7 @@
 
 package org.apache.flink.streaming.api.checkpoint;
 
-import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.Public;
 import org.apache.flink.api.common.functions.RuntimeContext;
 import org.apache.flink.api.common.state.KeyedStateStore;
 import org.apache.flink.api.common.state.OperatorStateStore;
@@ -140,7 +140,7 @@ import org.apache.flink.runtime.state.FunctionSnapshotContext;
  * @see ListCheckpointed
  * @see RuntimeContext
  */
-@PublicEvolving
+@Public
 public interface CheckpointedFunction {
 
/**



[flink] 03/04: [FLINK-18430][DataStream API] Classify CheckpointedFunction and CheckpointListener as @Public

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 95b9adbeaa7058c4fc804a5277cbaa958485d63b
Author: Stephan Ewen 
AuthorDate: Wed Jun 24 18:07:22 2020 +0200

[FLINK-18430][DataStream API] Classify CheckpointedFunction and 
CheckpointListener as @Public

This closes #12767
---
 .../main/java/org/apache/flink/runtime/state/CheckpointListener.java  | 4 ++--
 .../apache/flink/streaming/api/checkpoint/CheckpointedFunction.java   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
index adc4baf..a32b597 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
@@ -18,14 +18,14 @@
 
 package org.apache.flink.runtime.state;
 
-import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.Public;
 
 /**
 * This interface must be implemented by functions/operations that want to receive
 * a commit notification once a checkpoint has been completely acknowledged by all
 * participants.
  */
-@PublicEvolving
+@Public
 public interface CheckpointListener {
 
/**
diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
index 604e7f4..5aeeb34 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/checkpoint/CheckpointedFunction.java
@@ -18,7 +18,7 @@
 
 package org.apache.flink.streaming.api.checkpoint;
 
-import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.Public;
 import org.apache.flink.api.common.functions.RuntimeContext;
 import org.apache.flink.api.common.state.KeyedStateStore;
 import org.apache.flink.api.common.state.OperatorStateStore;
@@ -140,7 +140,7 @@ import org.apache.flink.runtime.state.FunctionSnapshotContext;
  * @see ListCheckpointed
  * @see RuntimeContext
  */
-@PublicEvolving
+@Public
 public interface CheckpointedFunction {
 
/**
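`@Public` and `@PublicEvolving` change no runtime behavior; they are marker annotations encoding a compatibility promise, which is why this commit is a pure annotation swap. A toy re-declaration to make that concrete (a hypothetical stand-in -- Flink's real annotations live in the flink-annotations module and differ in detail):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class StabilityAnnotationSketch {

    // Hypothetical stand-in for org.apache.flink.annotation.Public.
    @Documented
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Public {}

    // After FLINK-18430 the interface carries the stronger stability guarantee.
    @Public
    interface CheckpointListener {}

    public static void main(String[] args) {
        // The promotion is observable only through the annotation itself.
        System.out.println(CheckpointListener.class.isAnnotationPresent(Public.class)); // true
    }
}
```

Tooling (e.g. API-compatibility checks) and users read the marker; the JVM otherwise ignores it.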



[flink] branch master updated (ca53401 -> 49b5103)

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from ca53401  [FLINK-18426] Remove incompatible deprecated keys from ClusterOptions
 new 4776813  [FLINK-18429][DataStream API] Make CheckpointListener.notifyCheckpointAborted(checkpointId) a default method.
 new b689cea  [hotfix][DataStream API] Fix checkstyle issues and JavaDocs in CheckpointListener.
 new 95b9adb  [FLINK-18430][DataStream API] Classify CheckpointedFunction and CheckpointListener as @Public
 new 49b5103  [FLINK-18428][API/DataStream] Rename StreamExecutionEnvironment#continuousSource() to StreamExecutionEnvironment#source().

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/dev/stream/sources.md   |  6 +++---
 docs/dev/stream/sources.zh.md|  6 +++---
 .../base/source/reader/CoordinatedSourceITCase.java  |  6 +++---
 ...test_stream_execution_environment_completeness.py |  2 +-
 .../flink/runtime/state/CheckpointListener.java  | 20 +++-
 .../api/checkpoint/CheckpointedFunction.java |  4 ++--
 .../api/environment/StreamExecutionEnvironment.java  |  6 +++---
 .../api/graph/StreamingJobGraphGeneratorTest.java|  4 ++--
 .../api/scala/StreamExecutionEnvironment.scala   |  4 ++--
 .../api/scala/StreamExecutionEnvironmentTest.scala   |  2 +-
 10 files changed, 31 insertions(+), 29 deletions(-)



[flink] 02/04: [hotfix][DataStream API] Fix checkstyle issues and JavaDocs in CheckpointListener.

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b689cea3c3bcada76dec316ae41054a4a798e4e5
Author: Stephan Ewen 
AuthorDate: Wed Jun 24 17:51:22 2020 +0200

[hotfix][DataStream API] Fix checkstyle issues and JavaDocs in 
CheckpointListener.
---
 .../apache/flink/runtime/state/CheckpointListener.java   | 16 +---
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
index ddf2f2d..adc4baf 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
@@ -15,8 +15,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.flink.runtime.state;
 
+package org.apache.flink.runtime.state;
 
 import org.apache.flink.annotation.PublicEvolving;
 
@@ -30,12 +30,13 @@ public interface CheckpointListener {
 
/**
* This method is called as a notification once a distributed checkpoint has been completed.
-* 
-* Note that any exception during this method will not cause the checkpoint to
+*
+* Note that any exception during this method will not cause the checkpoint to
 * fail any more.
-* 
+*
 * @param checkpointId The ID of the checkpoint that has been completed.
-* @throws Exception
+* @throws Exception This method can propagate exceptions, which leads to a failure/recovery for
+*   the task. Note that this will NOT lead to the checkpoint being revoked.
 */
void notifyCheckpointComplete(long checkpointId) throws Exception;
 
@@ -43,7 +44,8 @@ public interface CheckpointListener {
* This method is called as a notification once a distributed checkpoint has been aborted.
 *
 * @param checkpointId The ID of the checkpoint that has been aborted.
-* @throws Exception
+* @throws Exception This method can propagate exceptions, which leads to a failure/recovery for
+*   the task.
 */
-   default void notifyCheckpointAborted(long checkpointId) throws Exception {};
+   default void notifyCheckpointAborted(long checkpointId) throws Exception {}
 }



[flink] 01/04: [FLINK-18429][DataStream API] Make CheckpointListener.notifyCheckpointAborted(checkpointId) a default method.

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4776813cc335080dbe8684f51c3aa0f7f1d774d0
Author: Stephan Ewen 
AuthorDate: Wed Jun 24 17:26:55 2020 +0200

[FLINK-18429][DataStream API] Make 
CheckpointListener.notifyCheckpointAborted(checkpointId) a default method.

This avoids breaking many user programs that use this interface.

This closes #12767
---
 .../main/java/org/apache/flink/runtime/state/CheckpointListener.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
index 13c8e39..ddf2f2d 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
@@ -45,5 +45,5 @@ public interface CheckpointListener {
 * @param checkpointId The ID of the checkpoint that has been aborted.
 * @throws Exception
 */
-   void notifyCheckpointAborted(long checkpointId) throws Exception;
+   default void notifyCheckpointAborted(long checkpointId) throws Exception {};
 }



[flink] 04/04: [FLINK-18428][API/DataStream] Rename StreamExecutionEnvironment#continuousSource() to StreamExecutionEnvironment#source().

2020-06-24 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 49b5103299374641662d66b5165441b532206b71
Author: Jiangjie (Becket) Qin 
AuthorDate: Wed Jun 24 21:22:18 2020 +0800

[FLINK-18428][API/DataStream] Rename 
StreamExecutionEnvironment#continuousSource() to 
StreamExecutionEnvironment#source().

This closes #12766
---
 docs/dev/stream/sources.md  | 6 +++---
 docs/dev/stream/sources.zh.md   | 6 +++---
 .../flink/connector/base/source/reader/CoordinatedSourceITCase.java | 6 +++---
 .../tests/test_stream_execution_environment_completeness.py | 2 +-
 .../flink/streaming/api/environment/StreamExecutionEnvironment.java | 6 +++---
 .../flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java   | 4 ++--
 .../flink/streaming/api/scala/StreamExecutionEnvironment.scala  | 4 ++--
 .../flink/streaming/api/scala/StreamExecutionEnvironmentTest.scala  | 2 +-
 8 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/docs/dev/stream/sources.md b/docs/dev/stream/sources.md
index 669ca8f..3c3db90 100644
--- a/docs/dev/stream/sources.md
+++ b/docs/dev/stream/sources.md
@@ -187,7 +187,7 @@ final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEn
 
 Source mySource = new MySource(...);
 
-DataStream stream = env.continuousSource(
+DataStream stream = env.fromSource(
 mySource,
 WatermarkStrategy.noWatermarks(),
 "MySourceName");
@@ -200,7 +200,7 @@ val env = StreamExecutionEnvironment.getExecutionEnvironment()
 
 val mySource = new MySource(...)
 
-val stream = env.continuousSource(
+val stream = env.fromSource(
   mySource,
   WatermarkStrategy.noWatermarks(),
   "MySourceName")
@@ -352,7 +352,7 @@ Apparently, the `SourceReader` implementations can also implement their own thre
The `WatermarkStrategy` is passed to the Source during creation in the DataStream API and creates both the [TimestampAssigner](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/TimestampAssigner.java) and [WatermarkGenerator](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkGenerator.java).
 
 {% highlight java %}
-environment.continuousSource(
+environment.fromSource(
 Source source,
 WatermarkStrategy timestampsAndWatermarks,
 String sourceName)
diff --git a/docs/dev/stream/sources.zh.md b/docs/dev/stream/sources.zh.md
index 3f20388..a063ecb 100644
--- a/docs/dev/stream/sources.zh.md
+++ b/docs/dev/stream/sources.zh.md
@@ -187,7 +187,7 @@ final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEn
 
 Source mySource = new MySource(...);
 
-DataStream stream = env.continuousSource(
+DataStream stream = env.fromSource(
 mySource,
 WatermarkStrategy.noWatermarks(),
 "MySourceName");
@@ -200,7 +200,7 @@ val env = StreamExecutionEnvironment.getExecutionEnvironment()
 
 val mySource = new MySource(...)
 
-val stream = env.continuousSource(
+val stream = env.fromSource(
   mySource,
   WatermarkStrategy.noWatermarks(),
   "MySourceName")
@@ -352,7 +352,7 @@ Apparently, the `SourceReader` implementations can also implement their own thre
The `WatermarkStrategy` is passed to the Source during creation in the DataStream API and creates both the [TimestampAssigner](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/TimestampAssigner.java) and [WatermarkGenerator](https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkGenerator.java).
 
 {% highlight java %}
-environment.continuousSource(
+environment.fromSource(
 Source source,
 WatermarkStrategy timestampsAndWatermarks,
 String sourceName)
diff --git 
a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
 
b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
index 6582210..3280c38 100644
--- 
a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
+++ 
b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/source/reader/CoordinatedSourceITCase.java
@@ -45,7 +45,7 @@ public class CoordinatedSourceITCase extends AbstractTestBase {
public void testEnumeratorReaderCommunication() throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
MockBaseSource source = new MockBaseSource(2, 10, Boundedness.BOUNDED);
-   DataStream stream = 

[flink] branch release-1.11 updated: [FLINK-18426] Remove incompatible deprecated keys from ClusterOptions

2020-06-24 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new 62c7265  [FLINK-18426] Remove incompatible deprecated keys from 
ClusterOptions
62c7265 is described below

commit 62c7265522fcf1b708b4906d5d74e40188a80f28
Author: Till Rohrmann 
AuthorDate: Wed Jun 24 10:31:33 2020 +0200

[FLINK-18426] Remove incompatible deprecated keys from ClusterOptions

ClusterOptions.INITIAL_REGISTRATION_TIMEOUT, MAX_REGISTRATION_TIMEOUT and 
REFUSED_REGISTRATION_DELAY
have incompatible deprecated options of type Duration associated. This 
causes the system to fail
if they are specified. Since the deprecated keys have not been used for a 
very long time, this commit
will remove the deprecated keys from the ClusterOptions.

This closes #12763.
---
 .../apache/flink/configuration/ClusterOptions.java  |  3 ---
 .../RetryingRegistrationConfigurationTest.java  | 21 +
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/configuration/ClusterOptions.java 
b/flink-core/src/main/java/org/apache/flink/configuration/ClusterOptions.java
index 857933a..051447a 100644
--- 
a/flink-core/src/main/java/org/apache/flink/configuration/ClusterOptions.java
+++ 
b/flink-core/src/main/java/org/apache/flink/configuration/ClusterOptions.java
@@ -34,14 +34,12 @@ public class ClusterOptions {
public static final ConfigOption<Long> INITIAL_REGISTRATION_TIMEOUT = ConfigOptions
.key("cluster.registration.initial-timeout")
.defaultValue(100L)
-   .withDeprecatedKeys("taskmanager.initial-registration-pause", "taskmanager.registration.initial-backoff")
.withDescription("Initial registration timeout between cluster components in milliseconds.");
 
@Documentation.Section(Documentation.Sections.EXPERT_FAULT_TOLERANCE)
public static final ConfigOption<Long> MAX_REGISTRATION_TIMEOUT = ConfigOptions
.key("cluster.registration.max-timeout")
.defaultValue(3L)
-   .withDeprecatedKeys("taskmanager.max-registration-pause", "taskmanager.registration.max-backoff")
.withDescription("Maximum registration timeout between cluster components in milliseconds.");
 
@Documentation.Section(Documentation.Sections.EXPERT_FAULT_TOLERANCE)
@@ -54,7 +52,6 @@ public class ClusterOptions {
public static final ConfigOption<Long> REFUSED_REGISTRATION_DELAY = ConfigOptions
.key("cluster.registration.refused-registration-delay")
.defaultValue(3L)
-   .withDeprecatedKeys("taskmanager.refused-registration-pause", "taskmanager.registration.refused-backoff")
.withDescription("The pause made after the registration attempt was refused in milliseconds.");
 
@Documentation.Section(Documentation.Sections.EXPERT_FAULT_TOLERANCE)
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/registration/RetryingRegistrationConfigurationTest.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/registration/RetryingRegistrationConfigurationTest.java
index 86b009d..fc471dd 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/registration/RetryingRegistrationConfigurationTest.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/registration/RetryingRegistrationConfigurationTest.java
@@ -20,10 +20,13 @@ package org.apache.flink.runtime.registration;
 
 import org.apache.flink.configuration.ClusterOptions;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.TaskManagerOptions;
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Test;
 
+import java.time.Duration;
+
 import static org.hamcrest.Matchers.is;
 import static org.junit.Assert.assertThat;
 
@@ -54,4 +57,22 @@ public class RetryingRegistrationConfigurationTest extends TestLogger {

assertThat(retryingRegistrationConfiguration.getErrorDelayMillis(), is(errorRegistrationDelay));
}
 
+   @Test
+   public void testConfigurationWithDeprecatedOptions() {
+   final Configuration configuration = new Configuration();
+
+   final Duration refusedRegistrationBackoff = Duration.ofMinutes(42L);
+   final Duration registrationMaxBackoff = Duration.ofSeconds(1L);
+   final Duration initialRegistrationBackoff = Duration.ofHours(1337L);
+
+   configuration.set(TaskManagerOptions.REFUSED_REGISTRATION_BACKOFF, refusedRegistrationBackoff);
+   configuration.set(TaskManagerOptions.REGISTRATION_MAX_BACKOFF, registrationMaxBackoff);
+   configuration.set(TaskManagerOptions.INITIAL_REGISTRATION_BACKOFF, 
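Why the deprecated keys had to go rather than stay mapped: the old `taskmanager.*` keys were Duration-typed (values such as "10 s"), while the `cluster.registration.*` options parse plain longs, so a configuration still using an old key failed when the option was read. A toy lookup reproducing that failure mode (the map and `getLong` helper are illustrative, not Flink's `Configuration` API):

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeySketch {

    // Resolve a long-typed option, falling back to its deprecated key.
    static long getLong(Map<String, String> conf, String key, String deprecatedKey, long dflt) {
        String raw = conf.containsKey(key) ? conf.get(key) : conf.get(deprecatedKey);
        if (raw == null) {
            return dflt;
        }
        return Long.parseLong(raw); // a Duration-style value like "10 s" blows up here
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // A user still setting the old Duration-typed key:
        conf.put("taskmanager.initial-registration-pause", "10 s");
        try {
            getLong(conf, "cluster.registration.initial-timeout",
                    "taskmanager.initial-registration-pause", 100L);
        } catch (NumberFormatException e) {
            System.out.println("incompatible deprecated key: " + e.getMessage());
        }
    }
}
```

With the `withDeprecatedKeys(...)` calls removed, the old keys are simply ignored and the default applies, instead of the system failing at startup.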

[flink] branch master updated (7def95b -> ca53401)

2020-06-24 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


 from 7def95b  [FLINK-18425][table] Convert object arrays to primitive arrays in GenericArrayData
 add ca53401  [FLINK-18426] Remove incompatible deprecated keys from ClusterOptions

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/configuration/ClusterOptions.java  |  3 ---
 .../RetryingRegistrationConfigurationTest.java  | 21 +
 2 files changed, 21 insertions(+), 3 deletions(-)



[flink] branch release-1.11 updated: [FLINK-18425][table] Convert object arrays to primitive arrays in GenericArrayData

2020-06-24 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new a05972f  [FLINK-18425][table] Convert object arrays to primitive arrays in GenericArrayData
a05972f is described below

commit a05972f98d3b939857d0c0911f68d97186d1cada
Author: Timo Walther 
AuthorDate: Wed Jun 24 10:27:47 2020 +0200

[FLINK-18425][table] Convert object arrays to primitive arrays in GenericArrayData

This closes #12762.
---
 .../apache/flink/table/data/GenericArrayData.java  | 57 +++---
 .../flink/table/data/binary/BinaryArrayData.java   |  2 +-
 .../table/data/DataStructureConvertersTest.java| 12 -
 3 files changed, 62 insertions(+), 9 deletions(-)

diff --git a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/GenericArrayData.java b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/GenericArrayData.java
index 0809012..9406056 100644
--- a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/GenericArrayData.java
+++ b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/GenericArrayData.java
@@ -244,39 +244,82 @@ public final class GenericArrayData implements ArrayData {
 	// Conversion Utilities
 	// --------------------------------------------------------------
 
+   private boolean anyNull() {
+   for (Object element : (Object[]) array) {
+   if (element == null) {
+   return true;
+   }
+   }
+   return false;
+   }
+
+   private void checkNoNull() {
+   if (anyNull()) {
+			throw new RuntimeException("Primitive array must not contain a null value.");
+   }
+   }
+
@Override
public boolean[] toBooleanArray() {
-   return (boolean[]) array;
+   if (isPrimitiveArray) {
+   return (boolean[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Boolean[]) array);
}
 
@Override
public byte[] toByteArray() {
-   return (byte[]) array;
+   if (isPrimitiveArray) {
+   return (byte[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Byte[]) array);
}
 
@Override
public short[] toShortArray() {
-   return (short[]) array;
+   if (isPrimitiveArray) {
+   return (short[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Short[]) array);
}
 
@Override
public int[] toIntArray() {
-   return (int[]) array;
+   if (isPrimitiveArray) {
+   return (int[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Integer[]) array);
}
 
@Override
public long[] toLongArray() {
-   return (long[]) array;
+   if (isPrimitiveArray) {
+   return (long[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Long[]) array);
}
 
@Override
public float[] toFloatArray() {
-   return (float[]) array;
+   if (isPrimitiveArray) {
+   return (float[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Float[]) array);
}
 
@Override
public double[] toDoubleArray() {
-   return (double[]) array;
+   if (isPrimitiveArray) {
+   return (double[]) array;
+   }
+   checkNoNull();
+   return ArrayUtils.toPrimitive((Double[]) array);
}
 }
 
diff --git a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/binary/BinaryArrayData.java b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/binary/BinaryArrayData.java
index 40526d5..06374d0 100644
--- a/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/binary/BinaryArrayData.java
+++ b/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/binary/BinaryArrayData.java
@@ -434,7 +434,7 @@ public final class BinaryArrayData extends BinarySection implements ArrayData, T
 
 	private void checkNoNull() {
 		if (anyNull()) {
-			throw new RuntimeException("Array can not have null value!");
+			throw new RuntimeException("Primitive array must not contain a null value.");

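The commit restructures `GenericArrayData`'s conversion methods around one pattern: return the backing array directly when it is already primitive, otherwise reject nulls and unbox element by element. A standalone sketch of that pattern for the `int` case (hypothetical class name; the real implementation delegates the unboxing loop to Commons Lang's `ArrayUtils.toPrimitive`):

```java
import java.util.Arrays;

// Simplified sketch of the int case: a boxed Integer[] is unboxed on demand,
// and a null element fails fast with a clear message instead of surfacing as
// a NullPointerException inside the unboxing loop.
public class PrimitiveArrayConversion {

    private final Object array;
    private final boolean isPrimitiveArray;

    public PrimitiveArrayConversion(Object array) {
        this.array = array;
        this.isPrimitiveArray = array.getClass().getComponentType().isPrimitive();
    }

    private void checkNoNull() {
        for (Object element : (Object[]) array) {
            if (element == null) {
                throw new RuntimeException("Primitive array must not contain a null value.");
            }
        }
    }

    public int[] toIntArray() {
        if (isPrimitiveArray) {
            return (int[]) array;  // already primitive: return the backing array directly
        }
        checkNoNull();             // boxed: reject nulls first, then unbox
        Integer[] boxed = (Integer[]) array;
        int[] result = new int[boxed.length];
        for (int i = 0; i < boxed.length; i++) {
            result[i] = boxed[i];
        }
        return result;
    }

    public static void main(String[] args) {
        int[] fromPrimitive = new PrimitiveArrayConversion(new int[]{1, 2, 3}).toIntArray();
        int[] fromBoxed = new PrimitiveArrayConversion(new Integer[]{1, 2, 3}).toIntArray();
        System.out.println(Arrays.equals(fromPrimitive, fromBoxed)); // prints "true"
    }
}
```

The same shape repeats in the diff for `boolean`, `byte`, `short`, `long`, `float`, and `double`.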
[flink] branch master updated (00863a2 -> 7def95b)

2020-06-24 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 00863a2  [FLINK-17639] Document which FileSystems are supported by the StreamingFileSink
 add 7def95b  [FLINK-18425][table] Convert object arrays to primitive arrays in GenericArrayData

No new revisions were added by this update.

Summary of changes:
 .../apache/flink/table/data/GenericArrayData.java  | 57 +++---
 .../flink/table/data/binary/BinaryArrayData.java   |  2 +-
 .../table/data/DataStructureConvertersTest.java| 12 -
 3 files changed, 62 insertions(+), 9 deletions(-)



[flink] branch release-1.10 updated: [FLINK-17639] Document FileSystems supported by the StreamingFileSink

2020-06-24 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.10 by this push:
 new b50704b  [FLINK-17639] Document FileSystems supported by the StreamingFileSink
b50704b is described below

commit b50704be84c800a68a00e56fd820d46fd0820904
Author: GuoWei Ma 
AuthorDate: Mon Jun 22 17:10:52 2020 +0800

[FLINK-17639] Document FileSystems supported by the StreamingFileSink
---
 docs/dev/connectors/streamfile_sink.md| 3 +++
 docs/dev/connectors/streamfile_sink.zh.md | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/docs/dev/connectors/streamfile_sink.md b/docs/dev/connectors/streamfile_sink.md
index 2e302a6..e1e8dd2 100644
--- a/docs/dev/connectors/streamfile_sink.md
+++ b/docs/dev/connectors/streamfile_sink.md
@@ -433,6 +433,9 @@ Given this, when trying to restore from an old checkpoint/savepoint which assume
 by subsequent successful checkpoints, Flink will refuse to resume and it will throw an exception as it cannot locate the
 in-progress file.
 
+Important Note 4: Currently, the `StreamingFileSink` only supports three filesystems:
+HDFS, S3, and Local. Flink will throw an exception when using an unsupported filesystem at runtime.
+
 ### S3-specific
 
 Important Note 1: For S3, the `StreamingFileSink`
diff --git a/docs/dev/connectors/streamfile_sink.zh.md b/docs/dev/connectors/streamfile_sink.zh.md
index bd74bef..55a8d74 100644
--- a/docs/dev/connectors/streamfile_sink.zh.md
+++ b/docs/dev/connectors/streamfile_sink.zh.md
@@ -405,6 +405,8 @@ Versions of Hadoop before 2.7 do not support this method, so Flink will throw an exception.
 Important Note 3: Flink and the `StreamingFileSink` never overwrite committed data. Therefore, when trying to restore from an old checkpoint/savepoint that contains in-progress files
 which are later committed by subsequent successful checkpoints, Flink cannot locate the in-progress files, throws an exception, and the restore fails.
 
+Important Note 4: Currently, the `StreamingFileSink` only supports three filesystems: HDFS, S3, and Local. Flink will throw an exception at runtime if an unsupported filesystem is configured.
+
 ### S3-specific notes
 
 Important Note 1: For S3, the `StreamingFileSink` only supports [Hadoop](https://hadoop.apache.org/)-based



[flink] branch release-1.11 updated: [FLINK-17639] Document FileSystems supported by the StreamingFileSink

2020-06-24 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new 8c7f366  [FLINK-17639] Document FileSystems supported by the StreamingFileSink
8c7f366 is described below

commit 8c7f366d8abf4106bd02dba1d4574be06d23737e
Author: GuoWei Ma 
AuthorDate: Mon Jun 22 17:10:52 2020 +0800

[FLINK-17639] Document FileSystems supported by the StreamingFileSink
---
 docs/dev/connectors/streamfile_sink.md| 3 +++
 docs/dev/connectors/streamfile_sink.zh.md | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/docs/dev/connectors/streamfile_sink.md b/docs/dev/connectors/streamfile_sink.md
index 4e7c4b2..121a02d 100644
--- a/docs/dev/connectors/streamfile_sink.md
+++ b/docs/dev/connectors/streamfile_sink.md
@@ -733,6 +733,9 @@ Given this, when trying to restore from an old checkpoint/savepoint which assume
 by subsequent successful checkpoints, Flink will refuse to resume and it will throw an exception as it cannot locate the
 in-progress file.
 
+Important Note 4: Currently, the `StreamingFileSink` only supports three filesystems:
+HDFS, S3, and Local. Flink will throw an exception when using an unsupported filesystem at runtime.
+
 ### S3-specific
 
 Important Note 1: For S3, the `StreamingFileSink`
diff --git a/docs/dev/connectors/streamfile_sink.zh.md b/docs/dev/connectors/streamfile_sink.zh.md
index 70f20bd..4e7fe46 100644
--- a/docs/dev/connectors/streamfile_sink.zh.md
+++ b/docs/dev/connectors/streamfile_sink.zh.md
@@ -705,6 +705,8 @@ Versions of Hadoop before 2.7 do not support this method, so Flink will throw an exception.
 Important Note 3: Flink and the `StreamingFileSink` never overwrite committed data. Therefore, when trying to restore from an old checkpoint/savepoint that contains in-progress files
 which are later committed by subsequent successful checkpoints, Flink cannot locate the in-progress files, throws an exception, and the restore fails.
 
+Important Note 4: Currently, the `StreamingFileSink` only supports three filesystems: HDFS, S3, and Local. Flink will throw an exception at runtime if an unsupported filesystem is configured.
+
 ### S3-specific notes
 
 Important Note 1: For S3, the `StreamingFileSink` only supports [Hadoop](https://hadoop.apache.org/)-based



[flink] branch master updated (44ea896 -> 00863a2)

2020-06-24 Thread kkloudas
This is an automated email from the ASF dual-hosted git repository.

kkloudas pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 44ea896  [FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer
 add 00863a2  [FLINK-17639] Document which FileSystems are supported by the StreamingFileSink

No new revisions were added by this update.

Summary of changes:
 docs/dev/connectors/streamfile_sink.md| 3 +++
 docs/dev/connectors/streamfile_sink.zh.md | 2 ++
 2 files changed, 5 insertions(+)



[flink] annotated tag release-1.11.0-rc3 updated (8a46b2d -> 9868be0)

2020-06-24 Thread zhijiang
This is an automated email from the ASF dual-hosted git repository.

zhijiang pushed a change to annotated tag release-1.11.0-rc3
in repository https://gitbox.apache.org/repos/asf/flink.git.


*** WARNING: tag release-1.11.0-rc3 was modified! ***

from 8a46b2d  (commit)
  to 9868be0  (tag)
 tagging 8a46b2d4b5dd319735d27dd87fee29fcd8519872 (commit)
 replaces pre-apache-rename
  by zhijiang
  on Wed Jun 24 11:02:48 2020 +0800

- Log -
release-1.11.0-rc3
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAABAgAGBQJe8sJYAAoJEAZTwKLOoA0OzrIQAIuZChhfVtGrBOxUFO2xAEP/
t8g4KBDoHNfyMijDA1C3HYD/31Fy0ugBFA9KWugvkPABABijfdlvkYm9g7r3gKyv
1rNpsAP6O+/GGJkpq6fw0QCAkVWHRpc6R91/DISkFNKoK/b2a1wsnByESFtH0ltT
j5YWcDNoG7SHoHek/l9ogl+2gSU0/mOXEopN8n79fq2sAhBb+iMDgOiEwbHyOMW1
RRhhCaKXGjfh9W2Fjq79NFElDmc2Z4loMBf2apNxInrwioxi0gvuEH9v/0n8u7Je
hzC8CvrRt4+jdTBYHK1A98YGYuLnsHdrJKUgSLYQlQqcypmrhnDHgEZh0OVzdMa1
4rlBxPR1MlLP0KXAkbANoWVz07s3QRf4I5ZH8500jlmoPfh1bH53WNtm4OTZsVQW
dIZcJvHmRdKkDsyn3Muptl8may4jmVhc55JKXO6Mo23AWH6E2K3g8hqxdOWi9r8g
3adQKwQMfuHUajOI0PyoN8FwaaCCsywvnZZ7SQmES4+jyTByFAlV/SFJLTusTITe
fxLmjdU6DY1oClKkiD6s/ItKKYfm+uBLHJmTeVJoT91NOoOlQib3YRnPkJbGpbK5
FhDi4HTRvBOQmPuk6GDBfWxPrutk1gBkqMZBgUY27Veu+1UR/9wwTWIyKVF2hgOs
jzciOlpL1sk4vRFxR0qz
=mV27
-END PGP SIGNATURE-
---


No new revisions were added by this update.

Summary of changes:



svn commit: r40165 - in /dev/flink/flink-1.11.0-rc3: ./ python/

2020-06-24 Thread liyu
Author: liyu
Date: Wed Jun 24 10:45:34 2020
New Revision: 40165

Log:
flink-1.11.0-rc3

Added:
dev/flink/flink-1.11.0-rc3/
dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz   (with props)
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.asc   (with props)
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.sha512
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz   (with props)
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz.asc   (with props)
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz.sha512
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-src.tgz   (with props)
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-src.tgz.asc   (with props)
    dev/flink/flink-1.11.0-rc3/flink-1.11.0-src.tgz.sha512
    dev/flink/flink-1.11.0-rc3/python/
    dev/flink/flink-1.11.0-rc3/python/apache-flink-1.11.0.tar.gz   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache-flink-1.11.0.tar.gz.asc   (with props)
dev/flink/flink-1.11.0-rc3/python/apache-flink-1.11.0.tar.gz.sha512
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp35-cp35m-linux_x86_64.whl   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp35-cp35m-linux_x86_64.whl.asc   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp35-cp35m-linux_x86_64.whl.sha512
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp35-cp35m-macosx_10_6_x86_64.whl   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp35-cp35m-macosx_10_6_x86_64.whl.asc   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp35-cp35m-macosx_10_6_x86_64.whl.sha512
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp36-cp36m-linux_x86_64.whl   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp36-cp36m-linux_x86_64.whl.asc   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp36-cp36m-linux_x86_64.whl.sha512
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp36-cp36m-macosx_10_9_x86_64.whl   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp36-cp36m-macosx_10_9_x86_64.whl.asc   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp36-cp36m-macosx_10_9_x86_64.whl.sha512
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp37-cp37m-linux_x86_64.whl   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp37-cp37m-linux_x86_64.whl.asc   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp37-cp37m-linux_x86_64.whl.sha512
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp37-cp37m-macosx_10_9_x86_64.whl   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp37-cp37m-macosx_10_9_x86_64.whl.asc   (with props)
    dev/flink/flink-1.11.0-rc3/python/apache_flink-1.11.0-cp37-cp37m-macosx_10_9_x86_64.whl.sha512

Added: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz
--
svn:mime-type = application/x-gzip

Added: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.asc
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.asc
--
svn:mime-type = application/pgp-signature

Added: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.sha512
==
--- dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.sha512 (added)
+++ dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.11.tgz.sha512 Wed Jun 24 10:45:34 2020
@@ -0,0 +1 @@
+b9b5f4c4fbb555d204539824636fc5b3c9307f33ab9d0c68f54ca3757fcaea41581d7102a0fe691e91e96b3389720768839b3f0778ca4092d3db86abc7ca6a4f  flink-1.11.0-bin-scala_2.11.tgz

Added: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz
--
svn:mime-type = application/x-gzip

Added: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz.asc
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.11.0-rc3/flink-1.11.0-bin-scala_2.12.tgz.asc
--
svn:mime-type = application/pgp-signature

Added: 

[flink] branch release-1.11 updated: [FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer

2020-06-24 Thread aljoscha
This is an automated email from the ASF dual-hosted git repository.

aljoscha pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new bb1f162  [FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer
bb1f162 is described below

commit bb1f162d8e7cf3d24c01679492e53b3a041cdde9
Author: yushengnan 
AuthorDate: Fri Jun 19 21:45:52 2020 +0800

[FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer

This solves the problem of concurrent modification when re-adding ES
index requests from a failure handler.
---
 .../connectors/elasticsearch/BufferingNoOpRequestIndexer.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java b/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java
index e639b82..07341da 100644
--- a/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java
+++ b/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java
@@ -27,9 +27,8 @@ import org.elasticsearch.action.update.UpdateRequest;
 
 import javax.annotation.concurrent.NotThreadSafe;
 
-import java.util.ArrayList;
 import java.util.Collections;
-import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
 
 /**
  * Implementation of a {@link RequestIndexer} that buffers {@link ActionRequest ActionRequests}
@@ -39,10 +38,10 @@ import java.util.List;
 @NotThreadSafe
 class BufferingNoOpRequestIndexer implements RequestIndexer {
 
-	private List<ActionRequest> bufferedRequests;
+	private ConcurrentLinkedQueue<ActionRequest> bufferedRequests;
 
 	BufferingNoOpRequestIndexer() {
-		this.bufferedRequests = new ArrayList<>(10);
+		this.bufferedRequests = new ConcurrentLinkedQueue<ActionRequest>();
	}
 
@Override



[flink] branch master updated: [FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer

2020-06-24 Thread aljoscha
This is an automated email from the ASF dual-hosted git repository.

aljoscha pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 44ea896  [FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer
44ea896 is described below

commit 44ea896d6c5bbfcabb79d9649dd15c834741c4b9
Author: yushengnan 
AuthorDate: Fri Jun 19 21:45:52 2020 +0800

[FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer

This solves the problem of concurrent modification when re-adding ES
index requests from a failure handler.
---
 .../connectors/elasticsearch/BufferingNoOpRequestIndexer.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java b/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java
index e639b82..07341da 100644
--- a/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java
+++ b/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/BufferingNoOpRequestIndexer.java
@@ -27,9 +27,8 @@ import org.elasticsearch.action.update.UpdateRequest;
 
 import javax.annotation.concurrent.NotThreadSafe;
 
-import java.util.ArrayList;
 import java.util.Collections;
-import java.util.List;
+import java.util.concurrent.ConcurrentLinkedQueue;
 
 /**
  * Implementation of a {@link RequestIndexer} that buffers {@link ActionRequest ActionRequests}
@@ -39,10 +38,10 @@ import java.util.List;
 @NotThreadSafe
 class BufferingNoOpRequestIndexer implements RequestIndexer {
 
-	private List<ActionRequest> bufferedRequests;
+	private ConcurrentLinkedQueue<ActionRequest> bufferedRequests;
 
 	BufferingNoOpRequestIndexer() {
-		this.bufferedRequests = new ArrayList<>(10);
+		this.bufferedRequests = new ConcurrentLinkedQueue<ActionRequest>();
	}
 
@Override


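The switch from `ArrayList` to `ConcurrentLinkedQueue` matters because the Elasticsearch failure handler can re-add requests while the buffer is being processed: `ArrayList`'s fail-fast iterator throws `ConcurrentModificationException` in that situation, while `ConcurrentLinkedQueue` tolerates it. A minimal standalone demonstration of the difference (illustrative class, not Flink code):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class RequeueDemo {

    // Re-adding an element while iterating an ArrayList triggers the
    // fail-fast modCount check and throws ConcurrentModificationException.
    static boolean listFails() {
        List<String> buffered = new ArrayList<>(List.of("req-1", "req-2"));
        try {
            for (String request : buffered) {
                buffered.add(request + "-retried"); // structural modification mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException expected) {
            return true;
        }
    }

    // ConcurrentLinkedQueue never throws when elements are offered while the
    // queue is being drained; poll() until empty is the typical retry-buffer
    // pattern.
    static int queueSucceeds() {
        Queue<String> buffered = new ConcurrentLinkedQueue<>(List.of("req-1", "req-2"));
        int drained = 0;
        String request;
        while ((request = buffered.poll()) != null) {
            drained++;
            if (drained <= 2) {
                buffered.offer(request + "-retried"); // safe requeue during drain
            }
        }
        return drained; // 2 originals + 2 retried = 4
    }

    public static void main(String[] args) {
        System.out.println(listFails());      // prints "true"
        System.out.println(queueSucceeds());  // prints "4"
    }
}
```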

[flink] branch master updated (6a6ad7c -> 44ea896)

2020-06-24 Thread aljoscha
This is an automated email from the ASF dual-hosted git repository.

aljoscha pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6a6ad7c  [FLINK-12489][Mesos] Parametrize network resource by name
 add 44ea896  [FLINK-14938] Use ConcurrentLinkedQueue in BufferingNoOpRequestIndexer

No new revisions were added by this update.

Summary of changes:
 .../connectors/elasticsearch/BufferingNoOpRequestIndexer.java  | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)



[flink] branch master updated (b6b5481 -> 6a6ad7c)

2020-06-24 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b6b5481  [FLINK-18420][tests] Disable failed test SQLClientHBaseITCase in java 11
 add 61946ca  [FLINK-12489][Mesos] Add network resource paramter
 add 6a6ad7c  [FLINK-12489][Mesos] Parametrize network resource by name

No new revisions were added by this update.

Summary of changes:
 .../mesos_task_manager_configuration.html  | 12 +++
 .../clusterframework/LaunchableMesosWorker.java|  6 +-
 .../MesosTaskManagerParameters.java| 24 ++
 .../org/apache/flink/mesos/scheduler/Offer.java|  6 +-
 .../org/apache/flink/mesos/util/MesosUtils.java|  5 +++--
 .../flink/mesos/scheduler/LaunchCoordinator.scala  |  6 +-
 .../clusterframework/MesosResourceManagerTest.java |  2 +-
 7 files changed, 55 insertions(+), 6 deletions(-)



[flink] branch master updated (6648492 -> b6b5481)

2020-06-24 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 6648492  [FLINK-17579] Allow user to set the TaskManager ResourceID in standalone mode
 add b6b5481  [FLINK-18420][tests] Disable failed test SQLClientHBaseITCase in java 11

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/flink/tests/util/hbase/SQLClientHBaseITCase.java   | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)



[flink] branch master updated (77fd975 -> 6648492)

2020-06-24 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 77fd975  [FLINK-18353] Update untranslated Chinese memory config doc files
 add e716293  [hotfix] Fix the access modifiers in TaskManagerRunner
 add b5e5063  [hotfix] Simplify the lambda expression in TaskManagerRunner
 add 6648492  [FLINK-17579] Allow user to set the TaskManager ResourceID in standalone mode

No new revisions were added by this update.

Summary of changes:
 .../generated/all_taskmanager_section.html |  6 
 .../generated/task_manager_configuration.html  |  6 
 .../flink/configuration/TaskManagerOptions.java| 12 +++
 .../kubernetes/KubernetesResourceManager.java  |  6 +++-
 .../decorators/InitTaskManagerDecorator.java   |  5 ---
 .../taskmanager/KubernetesTaskExecutorRunner.java  |  9 +
 .../apache/flink/kubernetes/utils/Constants.java   |  2 --
 .../kubernetes/KubernetesResourceManagerTest.java  |  5 ---
 .../decorators/InitTaskManagerDecoratorTest.java   |  1 -
 .../factory/KubernetesTaskManagerFactoryTest.java  |  2 +-
 .../mesos/entrypoint/MesosTaskExecutorRunner.java  | 10 +-
 .../clusterframework/LaunchableMesosWorker.java|  4 +--
 .../runtime/clusterframework/MesosConfigKeys.java  |  5 ---
 .../ActiveResourceManagerFactory.java  |  5 ++-
 .../runtime/taskexecutor/TaskManagerRunner.java| 41 ++
 .../taskexecutor/TaskManagerRunnerTest.java| 39 ++--
 ...tractTaskManagerProcessFailureRecoveryTest.java |  3 +-
 .../JobManagerHAProcessFailureRecoveryITCase.java  |  3 +-
 .../org/apache/flink/yarn/YarnResourceManager.java |  8 ++---
 .../apache/flink/yarn/YarnTaskExecutorRunner.java  | 10 ++
 20 files changed, 107 insertions(+), 75 deletions(-)



[flink] branch release-1.11 updated: [FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages

2020-06-24 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new 46a8ab9  [FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages
46a8ab9 is described below

commit 46a8ab9628a3297dccbea2de8cdd2f5f9d7cd5f9
Author: Shengkai <33114724+fsk...@users.noreply.github.com>
AuthorDate: Wed Jun 24 14:28:52 2020 +0800

[FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages

This closes #12757
---
 .../src/main/java/org/apache/flink/table/module/ModuleManager.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java
index 2a885d9..625928f 100644
--- a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java
+++ b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java
@@ -126,7 +126,7 @@ public class ModuleManager {
.findFirst();
 
 		if (result.isPresent()) {
-			LOG.info("Got FunctionDefinition '{}' from '{}' module.", name, result.get().getKey());
+			LOG.debug("Got FunctionDefinition '{}' from '{}' module.", name, result.get().getKey());
 
 			return result.get().getValue().getFunctionDefinition(name);
 		} else {



[flink] branch master updated (b37b8b0 -> 77fd975)

2020-06-24 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b37b8b0  [FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages
 add 77fd975  [FLINK-18353] Update untranslated Chinese memory config doc files

No new revisions were added by this update.

Summary of changes:
 docs/ops/memory/mem_setup.zh.md| 17 ++---
 docs/ops/memory/mem_setup_jobmanager.zh.md |  6 --
 2 files changed, 14 insertions(+), 9 deletions(-)



[flink] branch release-1.11 updated: [FLINK-18353] Update untranslated Chinese memory config doc files

2020-06-24 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new ba26c86  [FLINK-18353] Update untranslated Chinese memory config doc files
ba26c86 is described below

commit ba26c86e34cc6b769e40bedda77bf6c4921b3f09
Author: Andrey Zagrebin 
AuthorDate: Tue Jun 23 17:33:46 2020 +0300

[FLINK-18353] Update untranslated Chinese memory config doc files
---
 docs/ops/memory/mem_setup.zh.md| 17 ++---
 docs/ops/memory/mem_setup_jobmanager.zh.md |  6 --
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/docs/ops/memory/mem_setup.zh.md b/docs/ops/memory/mem_setup.zh.md
index 406caaa..33b7c62 100644
--- a/docs/ops/memory/mem_setup.zh.md
+++ b/docs/ops/memory/mem_setup.zh.md
@@ -86,7 +86,7 @@ which do not have default values, have to be configured explicitly:
 
 
 Note Explicitly configuring both *total process memory* and *total Flink memory*
-is not recommended. It may lead to deployment failures due to potential memory configuration conflicts. 
+is not recommended. It may lead to deployment failures due to potential memory configuration conflicts.
 Configuring other memory components also requires caution as it can produce further configuration conflicts.
 
 ## JVM Parameters
@@ -94,13 +94,16 @@ Configuring other memory components also requires caution as it can produce furt
 Flink explicitly adds the following memory related JVM arguments while starting its processes, based on the configured
 or derived memory component sizes:
 
-| **JVM Arguments**         | **Value for TaskManager**                       | **Value for JobManager** |
-| :------------------------ | :---------------------------------------------- | :----------------------- |
-| *-Xmx* and *-Xms*         | Framework + Task Heap Memory                    | JVM Heap Memory          |
-| *-XX:MaxDirectMemorySize* | Framework + Task Off-heap (*) + Network Memory  | Off-heap Memory (*)      |
-| *-XX:MaxMetaspaceSize*    | JVM Metaspace                                   | JVM Metaspace            |
+| **JVM Arguments**                                                                      | **Value for TaskManager**                       | **Value for JobManager**    |
+| :-------------------------------------------------------------------------------------- | :---------------------------------------------- | :-------------------------- |
+| *-Xmx* and *-Xms*                                                                        | Framework + Task Heap Memory                    | JVM Heap Memory             |
+| *-XX:MaxDirectMemorySize*(always added only for TaskManager, see note for JobManager)    | Framework + Task Off-heap (\*) + Network Memory | Off-heap Memory (\*),(\*\*) |
+| *-XX:MaxMetaspaceSize*                                                                   | JVM Metaspace                                   | JVM Metaspace               |
 {:.table-bordered}
-(*) Notice, that the native non-direct usage of memory in user code can be also accounted for as a part of the off-heap memory.
+(\*) Notice, that the native non-direct usage of memory in user code can be also accounted for as a part of the off-heap memory.
+
+(\*\*) The *JVM Direct memory limit* is added for JobManager process only if the corresponding option
+[`jobmanager.memory.enable-jvm-direct-memory-limit`](../config.html#jobmanager-memory-enable-jvm-direct-memory-limit) is set.
 
 
 Check also the detailed memory model for [TaskManager](mem_setup_tm.html#detailed-memory-model) and
diff --git a/docs/ops/memory/mem_setup_jobmanager.zh.md b/docs/ops/memory/mem_setup_jobmanager.zh.md
index c564da9..f0125d1 100644
--- a/docs/ops/memory/mem_setup_jobmanager.zh.md
+++ b/docs/ops/memory/mem_setup_jobmanager.zh.md
@@ -74,8 +74,10 @@ The Flink scripts and CLI set the *JVM Heap* size via the JVM parameters *-Xms*
 
 ### Configure Off-heap Memory
 
-The *Off-heap* memory component accounts for any type of *JVM direct memory* and *native memory* usage. Therefore, it
-is also set via the corresponding JVM argument: *-XX:MaxDirectMemorySize*, see also [JVM parameters](mem_setup.html#jvm-parameters).
+The *Off-heap* memory component accounts for any type of *JVM direct memory* and *native memory* usage. Therefore,
+you can also enable the *JVM Direct Memory* limit by setting the [`jobmanager.memory.enable-jvm-direct-memory-limit`](../config.html#jobmanager-memory-enable-jvm-direct-memory-limit) option.
+If this option is configured, Flink will set the limit to the *Off-heap* memory size via the corresponding JVM 




[flink] branch master updated: [FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages

2020-06-24 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b37b8b0  [FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages
b37b8b0 is described below

commit b37b8b09930a16b6ec772996a09eabdbc0c98853
Author: Shengkai <33114724+fsk...@users.noreply.github.com>
AuthorDate: Wed Jun 24 14:28:52 2020 +0800

[FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages

This closes #12757
---
 .../src/main/java/org/apache/flink/table/module/ModuleManager.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java
index 2a885d9..625928f 100644
--- a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java
+++ b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java
@@ -126,7 +126,7 @@ public class ModuleManager {
.findFirst();
 
 		if (result.isPresent()) {
-			LOG.info("Got FunctionDefinition '{}' from '{}' module.", name, result.get().getKey());
+			LOG.debug("Got FunctionDefinition '{}' from '{}' module.", name, result.get().getKey());
 
 			return result.get().getValue().getFunctionDefinition(name);
 		} else {



[flink] branch master updated (afebdc2 -> b37b8b0)

2020-06-24 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from afebdc2  [FLINK-18194][walkthroughs] Document new table walkthrough
 add b37b8b0  [FLINK-18351][table] Fix ModuleManager creates a lot of duplicate/similar log messages

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/flink/table/module/ModuleManager.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)