[GitHub] [flink] zentol merged pull request #18800: [FLINK-26187][docs] Chinese docs only redirect for chinese urls

2022-02-16 Thread GitBox


zentol merged pull request #18800:
URL: https://github.com/apache/flink/pull/18800


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573


   
   ## CI report:
   
   * 69d42893d3874932d77329e758818b743c528489 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * 9b492093f225a63b04fd8de96adeffec551bf31c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31710)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573


   
   ## CI report:
   
   * 8dbb8256563d48a7bdb13489d6670ae6215eebe1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31611)
 
   * 69d42893d3874932d77329e758818b743c528489 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
 
   * b263f91c0e716e39a9fd6ee6999f4d9b8fbe40b9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25585) JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed on the azure

2022-02-16 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25585:

Priority: Critical  (was: Major)

> JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed 
> on the azure
> -
>
> Key: FLINK-25585
> URL: https://issues.apache.org/jira/browse/FLINK-25585
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.14.2
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-01-08T02:39:54.7103772Z Jan 08 02:39:54 [ERROR] Tests run: 2, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 349.282 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase
> 2022-01-08T02:39:54.7105233Z Jan 08 02:39:54 [ERROR] 
> testDispatcherProcessFailure[ExecutionMode BATCH]  Time elapsed: 302.006 s  
> <<< FAILURE!
> 2022-01-08T02:39:54.7106478Z Jan 08 02:39:54 java.lang.AssertionError: The 
> program encountered a RuntimeException : 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error 
> while waiting for job to be initialized
> 2022-01-08T02:39:54.7107409Z Jan 08 02:39:54  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-08T02:39:54.7108084Z Jan 08 02:39:54  at 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure(JobManagerHAProcessFailureRecoveryITCase.java:383)
> 2022-01-08T02:39:54.7108952Z Jan 08 02:39:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-08T02:39:54.7109491Z Jan 08 02:39:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-08T02:39:54.7110107Z Jan 08 02:39:54  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-08T02:39:54.7110772Z Jan 08 02:39:54  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-08T02:39:54.7111594Z Jan 08 02:39:54  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-08T02:39:54.7112510Z Jan 08 02:39:54  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-08T02:39:54.7113734Z Jan 08 02:39:54  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-08T02:39:54.7114673Z Jan 08 02:39:54  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-08T02:39:54.7115423Z Jan 08 02:39:54  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-08T02:39:54.7116011Z Jan 08 02:39:54  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-08T02:39:54.7116586Z Jan 08 02:39:54  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-08T02:39:54.7117154Z Jan 08 02:39:54  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-08T02:39:54.7117686Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-08T02:39:54.7118448Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-08T02:39:54.7119020Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-08T02:39:54.7119571Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-08T02:39:54.7120180Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-08T02:39:54.7120754Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-08T02:39:54.7121286Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-08T02:39:54.7121832Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-08T02:39:54.7122376Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-08T02:39:54.7123179Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-01-08T02:39:54.7123796Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-01-08T02:39:54.7124304Z Jan 08 02:39:54  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2022-01-08T02:39:54.7125001Z Jan 08 02:39:54  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2022-01-08T02:39:54.7125753Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-08T02:39:54.7126595Z Jan 08 02:39:54  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573


   
   ## CI report:
   
   * 8dbb8256563d48a7bdb13489d6670ae6215eebe1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31611)
 
   * 69d42893d3874932d77329e758818b743c528489 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31718)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18058: [FLINK-24571][connectors/elasticsearch] Supports a system time function(now() and current_timestamp) in index pattern

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18058:
URL: https://github.com/apache/flink/pull/18058#issuecomment-988949831


   
   ## CI report:
   
   * 93c33001cf55690369281de939bd79bb3727ad9a UNKNOWN
   * 47b985560b3efeb6b084c62e67ab56c2eb2cd7a9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31705)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25585) JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed on the azure

2022-02-16 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493730#comment-17493730
 ] 

Yun Gao commented on FLINK-25585:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31707=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=12446

> JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed 
> on the azure
> -
>
> Key: FLINK-25585
> URL: https://issues.apache.org/jira/browse/FLINK-25585
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.2
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-01-08T02:39:54.7103772Z Jan 08 02:39:54 [ERROR] Tests run: 2, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 349.282 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase
> 2022-01-08T02:39:54.7105233Z Jan 08 02:39:54 [ERROR] 
> testDispatcherProcessFailure[ExecutionMode BATCH]  Time elapsed: 302.006 s  
> <<< FAILURE!
> 2022-01-08T02:39:54.7106478Z Jan 08 02:39:54 java.lang.AssertionError: The 
> program encountered a RuntimeException : 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error 
> while waiting for job to be initialized
> 2022-01-08T02:39:54.7107409Z Jan 08 02:39:54  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-08T02:39:54.7108084Z Jan 08 02:39:54  at 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure(JobManagerHAProcessFailureRecoveryITCase.java:383)
> 2022-01-08T02:39:54.7108952Z Jan 08 02:39:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-08T02:39:54.7109491Z Jan 08 02:39:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-08T02:39:54.7110107Z Jan 08 02:39:54  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-08T02:39:54.7110772Z Jan 08 02:39:54  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-08T02:39:54.7111594Z Jan 08 02:39:54  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-08T02:39:54.7112510Z Jan 08 02:39:54  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-08T02:39:54.7113734Z Jan 08 02:39:54  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-08T02:39:54.7114673Z Jan 08 02:39:54  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-08T02:39:54.7115423Z Jan 08 02:39:54  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-08T02:39:54.7116011Z Jan 08 02:39:54  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-08T02:39:54.7116586Z Jan 08 02:39:54  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-08T02:39:54.7117154Z Jan 08 02:39:54  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-08T02:39:54.7117686Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-08T02:39:54.7118448Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-08T02:39:54.7119020Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-08T02:39:54.7119571Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-08T02:39:54.7120180Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-08T02:39:54.7120754Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-08T02:39:54.7121286Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-08T02:39:54.7121832Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-08T02:39:54.7122376Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-08T02:39:54.7123179Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-01-08T02:39:54.7123796Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-01-08T02:39:54.7124304Z Jan 08 02:39:54  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2022-01-08T02:39:54.7125001Z Jan 08 02:39:54  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2022-01-08T02:39:54.7125753Z Jan 08 02:39:54  at 
> 

[jira] [Updated] (FLINK-25585) JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed on the azure

2022-02-16 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25585:

Affects Version/s: 1.15.0

> JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed 
> on the azure
> -
>
> Key: FLINK-25585
> URL: https://issues.apache.org/jira/browse/FLINK-25585
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.14.2
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-01-08T02:39:54.7103772Z Jan 08 02:39:54 [ERROR] Tests run: 2, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 349.282 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase
> 2022-01-08T02:39:54.7105233Z Jan 08 02:39:54 [ERROR] 
> testDispatcherProcessFailure[ExecutionMode BATCH]  Time elapsed: 302.006 s  
> <<< FAILURE!
> 2022-01-08T02:39:54.7106478Z Jan 08 02:39:54 java.lang.AssertionError: The 
> program encountered a RuntimeException : 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error 
> while waiting for job to be initialized
> 2022-01-08T02:39:54.7107409Z Jan 08 02:39:54  at 
> org.junit.Assert.fail(Assert.java:89)
> 2022-01-08T02:39:54.7108084Z Jan 08 02:39:54  at 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure(JobManagerHAProcessFailureRecoveryITCase.java:383)
> 2022-01-08T02:39:54.7108952Z Jan 08 02:39:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-08T02:39:54.7109491Z Jan 08 02:39:54  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-08T02:39:54.7110107Z Jan 08 02:39:54  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-08T02:39:54.7110772Z Jan 08 02:39:54  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-08T02:39:54.7111594Z Jan 08 02:39:54  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-08T02:39:54.7112510Z Jan 08 02:39:54  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-08T02:39:54.7113734Z Jan 08 02:39:54  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-08T02:39:54.7114673Z Jan 08 02:39:54  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-08T02:39:54.7115423Z Jan 08 02:39:54  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-08T02:39:54.7116011Z Jan 08 02:39:54  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-01-08T02:39:54.7116586Z Jan 08 02:39:54  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-08T02:39:54.7117154Z Jan 08 02:39:54  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-08T02:39:54.7117686Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-08T02:39:54.7118448Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-08T02:39:54.7119020Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-08T02:39:54.7119571Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-08T02:39:54.7120180Z Jan 08 02:39:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-08T02:39:54.7120754Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-08T02:39:54.7121286Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-08T02:39:54.7121832Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-08T02:39:54.7122376Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-08T02:39:54.7123179Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-01-08T02:39:54.7123796Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-01-08T02:39:54.7124304Z Jan 08 02:39:54  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2022-01-08T02:39:54.7125001Z Jan 08 02:39:54  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2022-01-08T02:39:54.7125753Z Jan 08 02:39:54  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-08T02:39:54.7126595Z Jan 08 02:39:54  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1035898573


   
   ## CI report:
   
   * 8dbb8256563d48a7bdb13489d6670ae6215eebe1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31611)
 
   * 69d42893d3874932d77329e758818b743c528489 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18345: [FLINK-25440][doc][pulsar] Both stopCursor and startCursor now uses publish time instead of event time

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18345:
URL: https://github.com/apache/flink/pull/18345#issuecomment-1011821419


   
   ## CI report:
   
   * c411c34c583a356d32abc7a2707161abb00ff929 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31708)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16419) Avoid to recommit transactions which are known committed successfully to Kafka upon recovery

2022-02-16 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493724#comment-17493724
 ] 

Martijn Visser commented on FLINK-16419:


[~qinjunjerry] It does, according to this comment:
https://issues.apache.org/jira/browse/FLINK-16419?focusedCommentId=17448680=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17448680
 

> Avoid to recommit transactions which are known committed successfully to 
> Kafka upon recovery
> 
>
> Key: FLINK-16419
> URL: https://issues.apache.org/jira/browse/FLINK-16419
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Runtime / Checkpointing
>Reporter: Jun Qin
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> usability
>
> When recovering from a snapshot (checkpoint/savepoint), FlinkKafkaProducer 
> tries to recommit all pre-committed transactions which are in the snapshot, 
> even if those transactions were successfully committed before (i.e., the call 
> to {{kafkaProducer.commitTransaction()}} via {{notifyCheckpointComplete()}} 
> returns OK). This may lead to recovery failures when recovering from a very 
> old snapshot, because the transactional IDs in that snapshot may have expired 
> and been removed from Kafka. For example, consider the following scenario:
>  # Start a Flink job with FlinkKafkaProducer sink with exactly-once
>  # Suspend the Flink job with a savepoint A
>  # Wait for time longer than {{transactional.id.expiration.ms}} + 
> {{transaction.remove.expired.transaction.cleanup.interval.ms}}
>  # Recover the job with savepoint A.
>  # The recovery will fail with the following error:
> {noformat}
> 2020-02-26 14:33:25,817 INFO  
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer
>   - Attempting to resume transaction Source: Custom Source -> Sink: 
> Unnamed-7df19f87deec5680128845fd9a6ca18d-1 with producerId 2001 and epoch 
> 1202020-02-26 14:33:25,914 INFO  org.apache.kafka.clients.Metadata            
>                 - Cluster ID: RN0aqiOwTUmF5CnHv_IPxA
> 2020-02-26 14:33:26,017 INFO  org.apache.kafka.clients.producer.KafkaProducer 
>              - [Producer clientId=producer-1, transactionalId=Source: Custom 
> Source -> Sink: Unnamed-7df19f87deec5680128845fd9a6ca18d-1] Closing the Kafka 
> producer with timeoutMillis = 92233720
> 36854775807 ms.
> 2020-02-26 14:33:26,019 INFO  org.apache.flink.runtime.taskmanager.Task       
>              - Source: Custom Source -> Sink: Unnamed (1/1) 
> (a77e457941f09cd0ebbd7b982edc0f02) switched from RUNNING to FAILED.
> org.apache.kafka.common.KafkaException: Unhandled error in EndTxnResponse: 
> The producer attempted to use a producer id which is not currently assigned 
> to its transactional id.
>         at 
> org.apache.kafka.clients.producer.internals.TransactionManager$EndTxnHandler.handleResponse(TransactionManager.java:1191)
>         at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:909)
>         at 
> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
>         at 
> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557)
>         at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:288)
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
> For now, the workaround is to call 
> {{producer.ignoreFailuresAfterTransactionTimeout()}}. This is a bit risky, as 
> it may hide real transaction timeout errors. 
> After discussing with [~becket_qin], [~pnowojski] and [~aljoscha], a possible 
> way forward is to let the JobManager, after it successfully notifies all operators 
> of the completion of a snapshot (via {{notifyCheckpointComplete}}), record the 
> success, e.g., write the successful transactional IDs somewhere in the 
> snapshot. Those transactions would then not need to be recommitted upon recovery.
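
For illustration, a minimal sketch of the workaround call mentioned above, assuming a hypothetical topic name, broker address and serialization schema (none of these are taken from the issue):

{code:java}
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class ExactlyOnceKafkaSinkSketch {

    public static FlinkKafkaProducer<String> createProducer() {
        Properties producerConfig = new Properties();
        // Hypothetical broker address, for illustration only.
        producerConfig.setProperty("bootstrap.servers", "kafka:9092");
        // Keep the client-side transaction timeout within the broker limit; the
        // broker still expires idle transactional IDs after
        // transactional.id.expiration.ms, which is what triggers this issue.
        producerConfig.setProperty("transaction.timeout.ms", "900000");

        FlinkKafkaProducer<String> producer =
                new FlinkKafkaProducer<>(
                        "output-topic",                    // hypothetical topic
                        new MyKafkaSerializationSchema(),  // hypothetical schema
                        producerConfig,
                        FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

        // The workaround described above: ignore recommit failures once the
        // transaction timeout has passed. Risky, as it can also hide genuine
        // transaction timeout errors.
        producer.ignoreFailuresAfterTransactionTimeout();
        return producer;
    }
}
{code}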



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] MrWhiteSike commented on pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


MrWhiteSike commented on pull request #18718:
URL: https://github.com/apache/flink/pull/18718#issuecomment-1042656222


   [@RocMarshal](https://github.com/RocMarshal) Thanks for the suggestions and 
please review it again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #18809: [FLINK-26199] Annotate StatementSet#compilePlan as experimental.

2022-02-16 Thread GitBox


twalthr closed pull request #18809:
URL: https://github.com/apache/flink/pull/18809


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26165) SavepointFormatITCase fails on azure

2022-02-16 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493721#comment-17493721
 ] 

Yun Gao commented on FLINK-26165:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31702=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=5848

> SavepointFormatITCase fails on azure
> 
>
> Key: FLINK-26165
> URL: https://issues.apache.org/jira/browse/FLINK-26165
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31474=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=13116]
> {code}
> [ERROR] 
> org.apache.flink.test.checkpointing.SavepointFormatITCase.testTriggerSavepointAndResumeWithFileBasedCheckpointsAndRelocateBasePath(SavepointFormatType,
>  StateBackendConfig)[2]  Time elapsed: 14.209 s  <<< ERROR!
> java.util.concurrent.ExecutionException: java.io.IOException: Unknown 
> implementation of StreamStateHa ndle: class 
> org.apache.flink.runtime.state.PlaceholderStreamStateHandle
>    at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>    at 
> org.apache.flink.test.checkpointing.SavepointFormatITCase.submitJobAndTakeSavepoint(SavepointFormatITCase.java:328)
>    at 
> org.apache.flink.test.checkpointing.SavepointFormatITCase.testTriggerSavepointAndResumeWithFileBasedCheckpointsAndRelocateBasePath(SavepointFormatITCase.java:248)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>    at java.lang.reflect.Method.invoke(Method.java:498)
>    at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>    at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>    at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(Invo
>  cationInterceptorChain.java:131)
>    at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>    at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.ja
>  va:140)
>    at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtensio
>  n.java:92)
>    at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMet
>  hod$0(ExecutableInvoker.java:115)
>    at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105
>  )
>    at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(Inv
>  ocationInterceptorChain.java:106)
>    at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChai
>  n.java:64)
>    at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationIntercep
>  torChain.java:45)
>    at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain
>  .java:37)
>    at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>    at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>    at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMeth
>  odTestDescriptor.java:214)
>    at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.ja
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26199) StatementSet.compilePlan doesn't pass the archiunit tests

2022-02-16 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-26199.

  Assignee: David Morávek
Resolution: Fixed

Fixed in master: fc86f13659dec102bbb7cb4c2741f3d4c530
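
Going by the linked pull request title (#18809, "Annotate StatementSet#compilePlan as experimental"), a minimal sketch of what the change amounts to; the interface below is abbreviated and not the exact Flink source:

{code:java}
import org.apache.flink.annotation.Experimental;
import org.apache.flink.table.api.CompiledPlan;

public interface StatementSet {

    // Marking the method @Experimental instead of @PublicEvolving takes it out of
    // the scope of the ArchUnit rule quoted below, which only inspects the return
    // and argument types of @PublicEvolving methods.
    @Experimental
    CompiledPlan compilePlan();
}
{code}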

> StatementSet.compilePlan doesn't pass the archiunit tests
> -
>
> Key: FLINK-26199
> URL: https://issues.apache.org/jira/browse/FLINK-26199
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31678=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=26852
> {code}
> Feb 16 18:10:30 Architecture Violation [Priority: MEDIUM] - Rule 'Return and 
> argument types of methods annotated with @PublicEvolving must be annotated 
> with @Public(Evolving).' was violated (1 times):
> Feb 16 18:10:30 org.apache.flink.table.api.StatementSet.compilePlan(): 
> Returned leaf type org.apache.flink.table.api.CompiledPlan does not satisfy: 
> reside outside of package 'org.apache.flink..' or annotated with @Public or 
> annotated with @PublicEvolving or annotated with @Deprecated
> Feb 16 18:10:30   at 
> com.tngtech.archunit.lang.ArchRule$Assertions.assertNoViolation(ArchRule.java:94)
> Feb 16 18:10:30   at 
> com.tngtech.archunit.lang.ArchRule$Assertions.check(ArchRule.java:82)
> Feb 16 18:10:30   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.check(FreezingArchRule.java:96)
> Feb 16 18:10:30   at 
> com.tngtech.archunit.junit.ArchUnitTestDescriptor$ArchUnitRuleDescriptor.execute(ArchUnitTestDescriptor.java:159)
> Feb 16 18:10:30   at 
> com.tngtech.archunit.junit.ArchUnitTestDescriptor$ArchUnitRuleDescriptor.execute(ArchUnitTestDescriptor.java:142)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> Feb 16 18:10:30   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> Feb 16 18:10:30   at java.util.ArrayList.forEach(ArrayList.java:1259)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246


   
   ## CI report:
   
   * 90c1d1a155c2ebc29b92799b887b71e093a838bd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31711)
 
   * 7b021db85f9d61abd8e550d9dbaaf26d42b82e56 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31716)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-docker] knaufk merged pull request #104: Add GPG key for 1.13.6 release

2022-02-16 Thread GitBox


knaufk merged pull request #104:
URL: https://github.com/apache/flink-docker/pull/104


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18363: [FLINK-25600][table-planner] Support new statement set syntax in sql client and update docs

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18363:
URL: https://github.com/apache/flink/pull/18363#issuecomment-1012985353


   
   ## CI report:
   
   * 710ff5f63fe97acf8e60fe509cad8d6b121d468c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31706)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246


   
   ## CI report:
   
   * 90c1d1a155c2ebc29b92799b887b71e093a838bd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31711)
 
   * 7b021db85f9d61abd8e550d9dbaaf26d42b82e56 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] deadwind4 removed a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


deadwind4 removed a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1042598533


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #20: [FLINK-26103] Introduce log store

2022-02-16 Thread GitBox


JingsongLi commented on a change in pull request #20:
URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808740283



##
File path: pom.xml
##
@@ -54,20 +54,23 @@ under the License.
 
 flink-table-store-core
 flink-table-store-connector
+flink-table-store-kafka
 
 
 
 1.15-SNAPSHOT
 2.12
 1.7.15
 2.17.1
+4.13.2

Review comment:
   The Flink ITCase base classes still use junit4.
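
   For context, a minimal sketch of the kind of test that keeps the junit4 dependency alive: integration tests extending Flink's JUnit 4 based harnesses. The class and test names below are hypothetical.

```java
import org.apache.flink.test.util.AbstractTestBase;

import org.junit.Test;

// AbstractTestBase wires a MiniCluster into the test through JUnit 4 rules, so
// ITCases extending it must stay on JUnit 4 even while plain unit tests use JUnit 5.
public class LogStoreITCase extends AbstractTestBase { // hypothetical name

    @Test
    public void testRoundTrip() throws Exception {
        // test body omitted
    }
}
```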




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] LadyForest commented on a change in pull request #20: [FLINK-26103] Introduce log store

2022-02-16 Thread GitBox


LadyForest commented on a change in pull request #20:
URL: https://github.com/apache/flink-table-store/pull/20#discussion_r808739591



##
File path: pom.xml
##
@@ -54,20 +54,23 @@ under the License.
 
 flink-table-store-core
 flink-table-store-connector
+flink-table-store-kafka
 
 
 
 1.15-SNAPSHOT
 2.12
 1.7.15
 2.17.1
+4.13.2

Review comment:
   Is there any legacy reason for us to keep both junit4 and junit5?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hanpf closed pull request #18811: My/release 1.14

2022-02-16 Thread GitBox


hanpf closed pull request #18811:
URL: https://github.com/apache/flink/pull/18811


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #19: [FLINK-26066] Introduce FileStoreRead

2022-02-16 Thread GitBox


JingsongLi commented on a change in pull request #19:
URL: https://github.com/apache/flink-table-store/pull/19#discussion_r808730677



##
File path: 
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/operation/FileStoreReadImpl.java
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.operation;
+
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.store.file.mergetree.MergeTreeReaderFactory;
+import org.apache.flink.table.store.file.mergetree.compact.IntervalPartition;
+import org.apache.flink.table.store.file.mergetree.sst.SstFileMeta;
+import org.apache.flink.table.store.file.utils.RecordReader;
+
+import java.io.IOException;
+import java.util.List;
+
+/** Default implementation of {@link FileStoreRead}. */
+public class FileStoreReadImpl implements FileStoreRead {
+
+private final MergeTreeReaderFactory mergeTreeReaderFactory;

Review comment:
   Do we really need `MergeTreeReaderFactory` and `MergeTreeWriterFactory` 
now? There is no `MergeTreeFactory` to be shared anymore.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #19: [FLINK-26066] Introduce FileStoreRead

2022-02-16 Thread GitBox


JingsongLi commented on a change in pull request #19:
URL: https://github.com/apache/flink-table-store/pull/19#discussion_r808729514



##
File path: 
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/operation/FileStoreReadImpl.java
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.file.operation;
+
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.store.file.mergetree.MergeTreeReaderFactory;
+import org.apache.flink.table.store.file.mergetree.compact.IntervalPartition;
+import org.apache.flink.table.store.file.mergetree.sst.SstFileMeta;
+import org.apache.flink.table.store.file.utils.RecordReader;
+
+import java.io.IOException;
+import java.util.List;
+
+/** Default implementation of {@link FileStoreRead}. */
+public class FileStoreReadImpl implements FileStoreRead {
+
+private final MergeTreeReaderFactory mergeTreeReaderFactory;
+
+public FileStoreReadImpl(MergeTreeReaderFactory mergeTreeReaderFactory) {
+this.mergeTreeReaderFactory = mergeTreeReaderFactory;
+}
+
+@Override
+public void withKeyProjection(int[][] projectedFields) {
+// TODO

Review comment:
   throw exception here




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hanpf commented on pull request #18811: My/release 1.14

2022-02-16 Thread GitBox


hanpf commented on pull request #18811:
URL: https://github.com/apache/flink/pull/18811#issuecomment-1042634671


   > 
   Sorry, faulty operation. Please ignore.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26142) Integrate flink-kubernetes-operator repo with CI/CD

2022-02-16 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493704#comment-17493704
 ] 

Gyula Fora commented on FLINK-26142:


I agree that the docker stuff is difficult to answer straight away; let's break 
it out into another ticket.

> Integrate flink-kubernetes-operator repo with CI/CD
> ---
>
> Key: FLINK-26142
> URL: https://issues.apache.org/jira/browse/FLINK-26142
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Yang Wang
>Priority: Major
>
> We should be able to run the unit/integration tests on the existing Flink CI 
> infra and publish docker images for new builds to dockerhub



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18797: [FLINK-26180] Update docs to introduce the compaction for FileSink.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18797:
URL: https://github.com/apache/flink/pull/18797#issuecomment-1041240844


   
   ## CI report:
   
   * 716d8eb657299bd7243957443128ae3935cb3e31 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31622)
 
   * 270a217fe153986cc08b9e0b2cc9a1327e20b7cf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31715)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18811: My/release 1.14

2022-02-16 Thread GitBox


flinkbot commented on pull request #18811:
URL: https://github.com/apache/flink/pull/18811#issuecomment-1042630458


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cce57a3e31067bdeb9ddb8f41b350b763840dbba (Thu Feb 17 
06:55:37 UTC 2022)
   
   **Warnings:**
* **4 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18797: [FLINK-26180] Update docs to introduce the compaction for FileSink.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18797:
URL: https://github.com/apache/flink/pull/18797#issuecomment-1041240844


   
   ## CI report:
   
   * 716d8eb657299bd7243957443128ae3935cb3e31 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31622)
 
   * 270a217fe153986cc08b9e0b2cc9a1327e20b7cf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18772: [FLINK-25992][Runtime/Coordination, Tests] For stability, wait until the latest restored checkpoint is not null to avoid race condit

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18772:
URL: https://github.com/apache/flink/pull/18772#issuecomment-1040071363


   
   ## CI report:
   
   * 45f15c4bae06b4c40e16ffc9ae1ebe0a72527363 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31703)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] pltbkd commented on a change in pull request #18797: [FLINK-26180] Update docs to introduce the compaction for FileSink.

2022-02-16 Thread GitBox


pltbkd commented on a change in pull request #18797:
URL: https://github.com/apache/flink/pull/18797#discussion_r808722805



##
File path: docs/content/docs/connectors/datastream/filesystem.md
##
@@ -958,6 +958,73 @@ val sink = FileSink
 {{< /tab >}}
 {{< /tabs >}}
 
+### Compaction
+
+Since version 1.15 `FileSink` supports compaction of the `pending` files,
+which allows the application to have a smaller checkpoint interval without 
generating a lot of small files,
+especially when using the [bulk encoded formats]({{< ref 
"docs/connectors/datastream/filesystem#bulk-encoded-formats" >}})
+that have to roll on taking checkpoints.
+
+Compaction can be enabled with:
+
+{{< tabs "enablecompaction" >}}
+{{< tab "Java" >}}
+```java
+
+FileSink fileSink=
+   FileSink.forRowFormat(new Path(path),new SimpleStringEncoder())
+   .enableCompact(
+   FileCompactStrategy.Builder.newBuilder()
+   .setSizeThreshold(1024)
+   .enableCompactionOnCheckpoint(5)
+   .build(),
+   new RecordWiseFileCompactor<>(
+   new DecoderBasedReader.Factory<>(SimpleStringDecoder::new)))
+   .build();
+
+```
+{{< /tab >}}
+{{< tab "Scala" >}}
+```scala
+
+val fileSink: FileSink[Integer] =
+  FileSink.forRowFormat(new Path(path), new SimpleStringEncoder[Integer]())
+.enableCompact(
+  FileCompactStrategy.Builder.newBuilder()
+.setSizeThreshold(1024)
+.enableCompactionOnCheckpoint(5)
+.build(),
+  new RecordWiseFileCompactor(
+new DecoderBasedReader.Factory(() => new SimpleStringDecoder)))
+.build()
+
+```
+{{< /tab >}}
+{{< /tabs >}}
+
+Once enabled, the compaction happens between the files becoming `pending` and 
getting committed. The pending files will
+first be committed to temporary files whose path starts with `.`. Then these 
files will be compacted according to
+the strategy by the compactor specified by the user, and the new compacted 
pending files will be generated.
+Then these pending files will be emitted to the committer to be committed to 
the final files. After that, the source files will be removed.
+
+When enabling compaction, you need to specify the {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactStrategy.html" 
name="FileCompactStrategy">}}
+and the {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactor.html" 
name="FileCompactor">}}.
+
+The {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactStrategy.html" 
name="FileCompactStrategy">}} specifies
+when and which files get compacted. Currently, there are two parallel 
conditions: the target file size and the number of checkpoints passed.
+Once the total size of the cached files has reached the size threshold, or the 
number of checkpoints since the last compaction has reached the specified 
number, 
+the cached files will be scheduled for compaction.
+
+The {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactor.html" 
name="FileCompactor">}} specifies how to compact
+the given list of `Path` and write the result to {{< javadoc 
file="org/apache/flink/connector/file/sink/filesystem/CompactingFileWriter.html"
 name="CompactingFileWriter">}}. It can be classified into two types 
according to the type of the given `CompactingFileWriter`:

Review comment:
   Fixed.
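   For illustration, a minimal sketch of the stream-based compactor variant discussed in this hunk. It assumes `ConcatFileCompactor` from the `org.apache.flink.connector.file.sink.compactor` package (which simply concatenates the source files); the output path and threshold values are placeholders, not recommendations.
   
   ```java
   import org.apache.flink.api.common.serialization.SimpleStringEncoder;
   import org.apache.flink.connector.file.sink.FileSink;
   import org.apache.flink.connector.file.sink.compactor.ConcatFileCompactor;
   import org.apache.flink.connector.file.sink.compactor.FileCompactStrategy;
   import org.apache.flink.core.fs.Path;
   
   public class CompactionSketch {
   
       public static FileSink<String> build(String outputPath) {
           return FileSink.forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>())
                   .enableCompact(
                           FileCompactStrategy.Builder.newBuilder()
                                   // compact once the cached pending files reach ~16 MiB ...
                                   .setSizeThreshold(16 * 1024 * 1024)
                                   // ... or after every 5 checkpoints, whichever comes first
                                   .enableCompactionOnCheckpoint(5)
                                   .build(),
                           // ConcatFileCompactor concatenates the source files as byte streams,
                           // which only suits formats without per-file headers or footers
                           new ConcatFileCompactor())
                   .build();
       }
   }
   ```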




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18811: My/release 1.14

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18811:
URL: https://github.com/apache/flink/pull/18811#issuecomment-1042626396


   
   ## CI report:
   
   * cce57a3e31067bdeb9ddb8f41b350b763840dbba Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31714)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26135) Separate job and deployment errors in FlinkDeployment status

2022-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26135:
---
Labels: pull-request-available  (was: )

> Separate job and deployment errors in FlinkDeployment status
> 
>
> Key: FLINK-26135
> URL: https://issues.apache.org/jira/browse/FLINK-26135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> At the moment the controller does not validate or tolerate any deployment 
> errors such as incorrect configurations. Those lead to an exception 
> loop in the reconcile logic.
> There are cases where the job deployment cannot be executed due to incorrect 
> configuration or other causes. In these cases the job may still be running 
> correctly, so the job status should remain OK, but we should signal a deployment 
> error to the user that requires action.
> There should be shared validation logic between the controller and the 
> webhook that is applied whenever a new FlinkDeployment update is 
> received by the controller. If an error is detected in the controller, set 
> the deployment status to error with a useful message and leave the current 
> job running.
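
A rough sketch of the shared-validation idea is below. All class names are hypothetical and do not reflect the operator's actual API; this only illustrates recording an error in the status instead of throwing inside the reconcile loop.

{code:java}
import java.util.Optional;

// Hypothetical shared validator used by both the admission webhook and the controller.
interface DeploymentValidator {
    // Returns a human-readable error if the spec is invalid, empty otherwise.
    Optional<String> validate(DeploymentSpecLike spec);
}

// Sketch of how the controller could consume the result without entering an exception loop.
final class ReconcilerSketch {
    void reconcile(DeploymentValidator validator, DeploymentSpecLike spec, StatusLike status) {
        Optional<String> error = validator.validate(spec);
        if (error.isPresent()) {
            // Surface the problem to the user but leave the currently running job untouched.
            status.setError(error.get());
            return;
        }
        status.clearError();
        // ... continue with the normal deployment / upgrade logic ...
    }
}

// Minimal placeholder types so the sketch is self-contained.
interface DeploymentSpecLike {}
interface StatusLike {
    void setError(String message);
    void clearError();
}
{code}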



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #18811: My/release 1.14

2022-02-16 Thread GitBox


flinkbot commented on pull request #18811:
URL: https://github.com/apache/flink/pull/18811#issuecomment-1042626396


   
   ## CI report:
   
   * cce57a3e31067bdeb9ddb8f41b350b763840dbba UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hanpf opened a new pull request #18811: My/release 1.14

2022-02-16 Thread GitBox


hanpf opened a new pull request #18811:
URL: https://github.com/apache/flink/pull/18811


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26209) Possibility of Command Injection attack

2022-02-16 Thread Iman Sharafaldin (Jira)
Iman Sharafaldin created FLINK-26209:


 Summary: Possibility of Command Injection attack 
 Key: FLINK-26209
 URL: https://issues.apache.org/jira/browse/FLINK-26209
 Project: Flink
  Issue Type: Bug
  Components: Library / Machine Learning
Reporter: Iman Sharafaldin


As you can see in line 134, the command line is built using string concatenation. An 
attacker who has control over args can execute malicious commands.

 

final String cmd = discoveryScript.getAbsolutePath() + " " + gpuAmount + " " + args;

[https://github.com/apache/flink/blob/0d29b23f892714e4936b8af2f896e3040ddc9e89/flink-external-resources/flink-external-resource-gpu/src/main/java/org/apache/flink/externalresource/gpu/GPUDriver.java#L134]

 

 

Reference:

https://owasp.org/www-community/attacks/Command_Injection
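
For illustration only, a common way to avoid this class of problem in Java is to pass each argument as a separate element to ProcessBuilder instead of building a single shell command string. This is a generic sketch, not a patch for GPUDriver, and the invoked script must still treat its arguments carefully.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class SafeScriptInvocation {

    public static Process run(String scriptPath, int gpuAmount, String userArgs) throws IOException {
        // Each argument is its own element, so no shell is involved and
        // metacharacters in userArgs are not interpreted by a shell.
        List<String> command = Arrays.asList(scriptPath, String.valueOf(gpuAmount), userArgs);
        return new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
    }
}
{code}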



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24677) JdbcBatchingOutputFormat should not generate circulate chaining of exceptions when flushing fails in timer thread

2022-02-16 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-24677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493696#comment-17493696
 ] 

张健 commented on FLINK-24677:


OK, thanks, I will do it next week. When the PR is ready, I will ping you on 
GitHub. :D

> JdbcBatchingOutputFormat should not generate circulate chaining of exceptions 
> when flushing fails in timer thread
> -
>
> Key: FLINK-24677
> URL: https://issues.apache.org/jira/browse/FLINK-24677
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Priority: Major
>
> This is reported from the [user mailing 
> list|https://lists.apache.org/thread.html/r3e725f52e4f325b9dcb790635cc642bd6018c4bca39f86c71b8a60f4%40%3Cuser.flink.apache.org%3E].
> In the timer thread created in {{JdbcBatchingOutputFormat#open}}, the 
> {{flushException}} field will be set if the call to {{flush}} throws an 
> exception. This exception is used to fail the job in the main thread.
> However, {{JdbcBatchingOutputFormat#flush}} will also check for this exception 
> and will wrap it with a new layer of runtime exception. This causes a 
> very long stack trace when the main thread finally discovers the exception and 
> fails.
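
A simplified sketch of the pattern being suggested, recording the first failure once and rethrowing it without wrapping it again. The names are illustrative and not Flink's actual JdbcBatchingOutputFormat code.

{code:java}
import java.io.IOException;

final class BatchingOutputSketch {

    private volatile Exception flushException;

    // Called from the timer thread created in open().
    void scheduledFlush() {
        try {
            flush();
        } catch (Exception e) {
            // Remember only the first failure instead of wrapping it again on every attempt.
            if (flushException == null) {
                flushException = e;
            }
        }
    }

    // Called from the main thread before writing a record.
    void checkFlushException() throws IOException {
        if (flushException != null) {
            // Wrap the recorded cause exactly once, keeping the stack trace short.
            throw new IOException("Writing records to JDBC failed.", flushException);
        }
    }

    void flush() throws Exception {
        checkFlushException();
        // ... execute the batched statements ...
    }
}
{code}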



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] MrWhiteSike commented on a change in pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


MrWhiteSike commented on a change in pull request #18718:
URL: https://github.com/apache/flink/pull/18718#discussion_r808717415



##
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##
@@ -811,61 +809,63 @@ input.sinkTo(sink)
 {{< /tab >}}
 {{< /tabs >}}
 
-The `SequenceFileWriterFactory` supports additional constructor parameters to 
specify compression settings.
+`SequenceFileWriterFactory` 提供额外的构造参数去设置是否开启压缩功能。
+
+
+
+### 桶分配
 
-### Bucket Assignment
+桶的逻辑定义了如何将数据分配到基本输出目录内的子目录中。
 
-The bucketing logic defines how the data will be structured into 
subdirectories inside the base output directory.
+Row-encoded Format 和 Bulk-encoded Format (参考 [Format 
Types](#sink-format-types)) 使用了 `DateTimeBucketAssigner` 作为默认的分配器。
+默认的分配器 `DateTimeBucketAssigner` 会基于使用了格式为 `-MM-dd--HH` 的系统默认时区来创建小时桶。日期格式( 
*即* 桶大小)和时区都可以手动配置。
 
-Both row and bulk formats (see [File Formats](#file-formats)) use the 
`DateTimeBucketAssigner` as the default assigner.
-By default the `DateTimeBucketAssigner` creates hourly buckets based on the 
system default timezone
-with the following format: `-MM-dd--HH`. Both the date format (*i.e.* 
bucket size) and timezone can be
-configured manually.
+我们可以在格式化构造器中通过调用 `.withBucketAssigner(assigner)` 方法去指定自定义的 `BucketAssigner`。
 
-We can specify a custom `BucketAssigner` by calling 
`.withBucketAssigner(assigner)` on the format builders.
+Flink 内置了两种 BucketAssigners:
 
-Flink comes with two built-in BucketAssigners:
+- `DateTimeBucketAssigner` : 默认的基于时间的分配器
+- `BasePathBucketAssigner` : 分配所有文件存储在基础路径上(单个全局桶)
 
-- `DateTimeBucketAssigner` : Default time based assigner
-- `BasePathBucketAssigner` : Assigner that stores all part files in the base 
path (single global bucket)
+
 
-### Rolling Policy
+### 滚动策略
 
-The `RollingPolicy` defines when a given in-progress part file will be closed 
and moved to the pending and later to finished state.
-Part files in the "finished" state are the ones that are ready for viewing and 
are guaranteed to contain valid data that will not be reverted in case of 
failure.
-In `STREAMING` mode, the Rolling Policy in combination with the checkpointing 
interval (pending files become finished on the next checkpoint) control how 
quickly
-part files become available for downstream readers and also the size and 
number of these parts. In `BATCH` mode, part-files become visible at the end of 
the job but
-the rolling policy can control their maximum size.
+`RollingPolicy` 定义了何时关闭给定的进行中的文件,并将其转换为挂起状态,然后在转换为完成状态。
+完成状态的文件,可供查看并且可以保证数据的有效性,在出现故障时不会恢复。
+在 `STREAMING` 模式下,滚动策略结合 Checkpoint 间隔(到下一个 Checkpoint 完成时挂起状态的文件变成完成状态)共同控制 
Part 文件对下游 readers 是否可见以及这些文件的大小和数量。在 `BATCH` 模式下,Part 文件在 Job 
最后对下游才变得可见,滚动策略只控制最大的 Part 文件大小。
 
-Flink comes with two built-in RollingPolicies:
+Flink 内置了两种 RollingPolicies:
 
 - `DefaultRollingPolicy`
 - `OnCheckpointRollingPolicy`
 
-### Part file lifecycle
+
 
-In order to use the output of the `FileSink` in downstream systems, we need to 
understand the naming and lifecycle of the output files produced.
+### Part 文件生命周期
 
-Part files can be in one of three states:
-1. **In-progress** : The part file that is currently being written to is 
in-progress
-2. **Pending** : Closed (due to the specified rolling policy) in-progress 
files that are waiting to be committed
-3. **Finished** : On successful checkpoints (`STREAMING`) or at the end of 
input (`BATCH`) pending files transition to "Finished"
+为了在下游使用 `FileSink` 作为输出,我们需要了解生成的输出文件的命名和生命周期。
 
-Only finished files are safe to read by downstream systems as those are 
guaranteed to not be modified later.
+Part 文件可以处于以下三种状态中的任意一种:
+1. **In-progress** :当前正在被写入的 Part 文件处于 in-progress 状态
+2. **Pending** : (由于指定的滚动策略)关闭 in-progress 状态的文件,并且等待提交 
+3. **Finished** : 流模式(`STREAMING`)下的成功的 Checkpoint 
或者批模式(`BATCH`)下输入结束,挂起状态文件转换为完成状态
 
-Each writer subtask will have a single in-progress part file at any given time 
for every active bucket, but there can be several pending and finished files.
+只有完成状态下的文件被下游读取时才是安全的,并且保证不会被修改。
 
-**Part file example**
+对于每个活动的桶,在任何给定时间每个写入 Subtask 中都有一个正在进行的 Part 文件,但可能有多个挂起和完成的文件。
 
-To better understand the lifecycle of these files let's look at a simple 
example with 2 sink subtasks:
+**Part 文件示例**

Review comment:
   No, it isn't a link tag.
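   As a side note to the bucket assigner and rolling policy sections touched in this hunk, a minimal sketch of wiring both onto a row-format `FileSink` could look like the following. The output path and date format are placeholders, and the exact package names of `DateTimeBucketAssigner` and `OnCheckpointRollingPolicy` may differ slightly between Flink versions.
   
   ```java
   import org.apache.flink.api.common.serialization.SimpleStringEncoder;
   import org.apache.flink.connector.file.sink.FileSink;
   import org.apache.flink.core.fs.Path;
   import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;
   import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;
   
   public class BucketingSketch {
   
       public static FileSink<String> build(String outputPath) {
           return FileSink.forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>())
                   // hourly buckets with an explicit date format instead of the defaults
                   .withBucketAssigner(new DateTimeBucketAssigner<>("yyyy-MM-dd--HH"))
                   // roll the in-progress part file on every checkpoint
                   .withRollingPolicy(OnCheckpointRollingPolicy.build())
                   .build();
       }
   }
   ```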




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-26208) Introduce implementation of ManagedTableFactory

2022-02-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-26208:


Assignee: Jane Chan

> Introduce implementation of ManagedTableFactory
> ---
>
> Key: FLINK-26208
> URL: https://issues.apache.org/jira/browse/FLINK-26208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> Introduce an implementation of 
> `org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
> #onCreateTable, #onDropTable and #onCompactTable) to support interaction with 
> Flink's TableEnv.
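
A rough skeleton of the hooks named above is sketched below; the real `ManagedTableFactory` interface has richer context and return types, so all signatures here are placeholders only.

{code:java}
import java.util.Map;

// Placeholder skeleton; not the actual org.apache.flink.table.factories.ManagedTableFactory signatures.
final class ManagedTableFactorySketch {

    // Fill in storage-specific options (e.g. a table path) before the table is persisted.
    Map<String, String> enrichOptions(Map<String, String> catalogOptions) {
        return catalogOptions;
    }

    // Create the underlying storage when CREATE TABLE is executed.
    void onCreateTable(Map<String, String> enrichedOptions, boolean ignoreIfExists) {}

    // Clean up the underlying storage when DROP TABLE is executed.
    void onDropTable(Map<String, String> enrichedOptions, boolean ignoreIfNotExists) {}

    // Return the options needed to run a compaction when ALTER TABLE ... COMPACT is executed.
    Map<String, String> onCompactTable(Map<String, String> enrichedOptions) {
        return enrichedOptions;
    }
}
{code}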



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-20830) Add a type of HEADLESS_CLUSTER_IP for rest service type

2022-02-16 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang closed FLINK-20830.
-
Resolution: Fixed

> Add a type of HEADLESS_CLUSTER_IP for rest service type
> ---
>
> Key: FLINK-20830
> URL: https://issues.apache.org/jira/browse/FLINK-20830
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Aitozi
>Assignee: Aitozi
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.15.0, 1.14.4
>
>
> Now we can choose ClusterIP, NodePort or LoadBalancer as the rest service type. 
> But in our internal kubernetes cluster, there is no kube-proxy, and ClusterIP 
> mode relies on kube-proxy. So could we support another type, 
> HEADLESS_CLUSTER_IP, to talk directly to the jobmanager pod? cc [~fly_in_gis]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-25877) Update the copyright year in NOTICE files

2022-02-16 Thread nyingping (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490909#comment-17490909
 ] 

nyingping edited comment on FLINK-25877 at 2/17/22, 6:28 AM:
-

Could someone assign it to me? I'd be happy to fix it.
Thanks in advance.


was (Author: JIRAUSER282677):
Can someone assign it to me,happy  to fix it.
thanks in advance.

> Update the copyright year in NOTICE files
> -
>
> Key: FLINK-25877
> URL: https://issues.apache.org/jira/browse/FLINK-25877
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: nyingping
>Priority: Minor
>  Labels: pull-request-available
>
> * The current copyright year is 2014-2021 in NOTICE files. We should change 
> it to 2014-2022.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-20830) Add a type of HEADLESS_CLUSTER_IP for rest service type

2022-02-16 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493687#comment-17493687
 ] 

Yang Wang commented on FLINK-20830:
---

Fixed via:

master(release-1.15): 1ae09ae6942eaaf5a6197c68fc12ee4e9fc1a105

release-1.14: 68f0688e38a43e99417c26b630395e234f605aff

> Add a type of HEADLESS_CLUSTER_IP for rest service type
> ---
>
> Key: FLINK-20830
> URL: https://issues.apache.org/jira/browse/FLINK-20830
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Aitozi
>Assignee: Aitozi
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.15.0
>
>
> Now we can choose ClusterIP, NodePort or LoadBalancer as the rest service type. 
> But in our internal kubernetes cluster, there is no kube-proxy, and ClusterIP 
> mode relies on kube-proxy. So could we support another type, 
> HEADLESS_CLUSTER_IP, to talk directly to the jobmanager pod? cc [~fly_in_gis]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26086) fixed some causing ambiguities including the 'shit' comment

2022-02-16 Thread nyingping (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490908#comment-17490908
 ] 

nyingping edited comment on FLINK-26086 at 2/17/22, 6:28 AM:
-

Could someone assign it to me? I'd be happy to fix it.
Thanks in advance.


was (Author: JIRAUSER282677):
Can someone assign it to me,happy  to fix it.
thanks in advance.

> fixed some causing ambiguities including the 'shit' comment
> ---
>
> Key: FLINK-26086
> URL: https://issues.apache.org/jira/browse/FLINK-26086
> Project: Flink
>  Issue Type: Improvement
>Reporter: nyingping
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-02-11-16-38-12-674.png
>
>
> such as 
> `@param shiftTimeZone the shit timezone of the window`
>  !image-2022-02-11-16-38-12-674.png! 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20830) Add a type of HEADLESS_CLUSTER_IP for rest service type

2022-02-16 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-20830:
--
Fix Version/s: 1.14.4

> Add a type of HEADLESS_CLUSTER_IP for rest service type
> ---
>
> Key: FLINK-20830
> URL: https://issues.apache.org/jira/browse/FLINK-20830
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Aitozi
>Assignee: Aitozi
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.15.0, 1.14.4
>
>
> Now we can choose ClusterIP, NodePort or LoadBalancer as the rest service type. 
> But in our internal kubernetes cluster, there is no kube-proxy, and ClusterIP 
> mode relies on kube-proxy. So could we support another type, 
> HEADLESS_CLUSTER_IP, to talk directly to the jobmanager pod? cc [~fly_in_gis]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26208) Introduce implementation of ManagedTableFactory

2022-02-16 Thread Jane Chan (Jira)
Jane Chan created FLINK-26208:
-

 Summary: Introduce implementation of ManagedTableFactory
 Key: FLINK-26208
 URL: https://issues.apache.org/jira/browse/FLINK-26208
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan


Introduce an implementation of 
`org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
#onCreateTable, #onDropTable and #onCompactTable) to support interaction with 
Flink's TableEnv.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] wangyang0918 merged pull request #18767: [FLINK-20830][BP 1.14][k8s] Add type of Headless_Cluster_IP for external rest …

2022-02-16 Thread GitBox


wangyang0918 merged pull request #18767:
URL: https://github.com/apache/flink/pull/18767


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18810: [FLINK-25289][hotfix] use the normal jar in flink-end-to-end-tests instead of a separate one

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18810:
URL: https://github.com/apache/flink/pull/18810#issuecomment-1042612648


   
   ## CI report:
   
   * 5ec04ad6ed6a32bb3e6b7167fbacda8511f12b48 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31712)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18785: [FLINK-26167][table-planner] Explicitly set the partitioner for the sql operators whose shuffle and sort are removed

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18785:
URL: https://github.com/apache/flink/pull/18785#issuecomment-1040524147


   
   ## CI report:
   
   * 15dc3494e8378bd67fd2ca859eeb317d88d45c95 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31646)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26207) refactor web ui's components to depend on module's injected config object

2022-02-16 Thread Junhan Yang (Jira)
Junhan Yang created FLINK-26207:
---

 Summary: refactor web ui's components to depend on module's 
injected config object
 Key: FLINK-26207
 URL: https://issues.apache.org/jira/browse/FLINK-26207
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Web Frontend
Reporter: Junhan Yang


By making components depend on an injected config object, one is able to configure 
the corresponding module's view, styles, and/or business logic.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #18810: [FLINK-25289][hotfix] use the normal jar in flink-end-to-end-tests instead of a separate one

2022-02-16 Thread GitBox


flinkbot commented on pull request #18810:
URL: https://github.com/apache/flink/pull/18810#issuecomment-1042613234


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5ec04ad6ed6a32bb3e6b7167fbacda8511f12b48 (Thu Feb 17 
06:20:49 UTC 2022)
   
   **Warnings:**
* **2 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on pull request #18785: [FLINK-26167][table-planner] Explicitly set the partitioner for the sql operators whose shuffle and sort are removed

2022-02-16 Thread GitBox


godfreyhe commented on pull request #18785:
URL: https://github.com/apache/flink/pull/18785#issuecomment-1042612877


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18810: [FLINK-25289][hotfix] use the normal jar in flink-end-to-end-tests instead of a separate one

2022-02-16 Thread GitBox


flinkbot commented on pull request #18810:
URL: https://github.com/apache/flink/pull/18810#issuecomment-1042612648


   
   ## CI report:
   
   * 5ec04ad6ed6a32bb3e6b7167fbacda8511f12b48 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)


[ https://issues.apache.org/jira/browse/FLINK-26201 ]


Jane Chan deleted comment on FLINK-26201:
---

was (Author: qingyue):
The reason is due to the dependency of flink-avro is not updated.

> FileStoreScanTest is not stable
> ---
>
> Key: FLINK-26201
> URL: https://issues.apache.org/jira/browse/FLINK-26201
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
> are randomly failed with the following stacktrace.
>  
> h3. How to reproduce
> You can reproduce this issue either by `mvn clean package/install` or run the 
> individual test in IDE.
> h3. Details
> h6. FileStoreScanTest#testWithSnapshot
>  
> {code:java}
> java.lang.IllegalStateException: Trying to add file 
> {org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
> sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
> might be corrupted.    at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
>     at java.base/java.util.HashMap.compute(HashMap.java:1228)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> 

[GitHub] [flink] ruanhang1993 opened a new pull request #18810: [FLINK-25289][hotfix] use the normal jar instead of a separate one

2022-02-16 Thread GitBox


ruanhang1993 opened a new pull request #18810:
URL: https://github.com/apache/flink/pull/18810


   ## What is the purpose of the change
   
   This pull request remove building a separate jar for the 
`flink-connector-test-utils` module.
   
   ## Brief change log
   
 - Remove building a separate jar for `flink-connector-test-utils` module
 - `flink-end-to-end-tests` uses the normal jar
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper:no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26206) refactor web ui's service layer and app config

2022-02-16 Thread Junhan Yang (Jira)
Junhan Yang created FLINK-26206:
---

 Summary: refactor web ui's service layer and app config
 Key: FLINK-26206
 URL: https://issues.apache.org/jira/browse/FLINK-26206
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Web Frontend
Reporter: Junhan Yang


Add abstractions for the API service layer and refactor app.config.ts into a config 
service.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-26201.
-
Resolution: Not A Bug

The reason is that the flink-avro dependency was not updated.

> FileStoreScanTest is not stable
> ---
>
> Key: FLINK-26201
> URL: https://issues.apache.org/jira/browse/FLINK-26201
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
> are randomly failed with the following stacktrace.
>  
> h3. How to reproduce
> You can reproduce this issue either by `mvn clean package/install` or run the 
> individual test in IDE.
> h3. Details
> h6. FileStoreScanTest#testWithSnapshot
>  
> {code:java}
> java.lang.IllegalStateException: Trying to add file 
> {org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
> sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
> might be corrupted.    at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
>     at java.base/java.util.HashMap.compute(HashMap.java:1228)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>     at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246


   
   ## CI report:
   
   * 90c1d1a155c2ebc29b92799b887b71e093a838bd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31711)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493678#comment-17493678
 ] 

Jane Chan commented on FLINK-26201:
---

The reason is that the flink-avro dependency was not updated.

> FileStoreScanTest is not stable
> ---
>
> Key: FLINK-26201
> URL: https://issues.apache.org/jira/browse/FLINK-26201
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
> are randomly failed with the following stacktrace.
>  
> h3. How to reproduce
> You can reproduce this issue either by `mvn clean package/install` or run the 
> individual test in IDE.
> h3. Details
> h6. FileStoreScanTest#testWithSnapshot
>  
> {code:java}
> java.lang.IllegalStateException: Trying to add file 
> {org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
> sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
> might be corrupted.    at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
>     at java.base/java.util.HashMap.compute(HashMap.java:1228)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>     at 
> 

[GitHub] [flink-ml] lindong28 merged pull request #58: [FLINK-26100][docs] Set up Flink ML Document Website

2022-02-16 Thread GitBox


lindong28 merged pull request #58:
URL: https://github.com/apache/flink-ml/pull/58


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] lindong28 commented on pull request #58: [FLINK-26100][docs] Set up Flink ML Document Website

2022-02-16 Thread GitBox


lindong28 commented on pull request #58:
URL: https://github.com/apache/flink-ml/pull/58#issuecomment-1042606550


   Thanks for the PR. LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18799: [FLINK-26145][docs] fix a kubernetes image that does not exist in the…

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18799:
URL: https://github.com/apache/flink/pull/18799#issuecomment-1041334465


   
   ## CI report:
   
   * 73640cb14e6a68cacc27b281f8d5508a5ffa21ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31641)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] liuyongvs commented on pull request #18799: [FLINK-26145][docs] fix a kubernetes image that does not exist in the…

2022-02-16 Thread GitBox


liuyongvs commented on pull request #18799:
URL: https://github.com/apache/flink/pull/18799#issuecomment-1042601380


   @flinkbot run azure re-run


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 90c1d1a155c2ebc29b92799b887b71e093a838bd Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31711)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26196) error when Incremental Checkpoints by RocksDb

2022-02-16 Thread Yue Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493664#comment-17493664
 ] 

Yue Ma commented on FLINK-26196:


I think you might need to provide more information to figure out where this 
exception is happening, for example the logs before the exception in this 
task, etc.

> error when Incremental Checkpoints  by RocksDb 
> ---
>
> Key: FLINK-26196
> URL: https://issues.apache.org/jira/browse/FLINK-26196
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.13.2
>Reporter: hjw
>Priority: Critical
>
> When I use incremental checkpoints with RocksDB, errors happen occasionally. 
> Fortunately, the Flink job keeps running normally.
> Log:
> {code:java}
> java.io.IOException: Could not perform checkpoint 2804 for operator 
> cc-rule-keyByAndReduceStream (2/8)#1.
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1045)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointBarrierHandler.notifyCheckpoint(CheckpointBarrierHandler.java:135)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.triggerCheckpoint(SingleCheckpointBarrierHandler.java:250)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.access$100(SingleCheckpointBarrierHandler.java:61)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler$ControllerImpl.triggerGlobalCheckpoint(SingleCheckpointBarrierHandler.java:431)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.AbstractAlignedBarrierHandlerState.barrierReceived(AbstractAlignedBarrierHandlerState.java:61)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.processBarrier(SingleCheckpointBarrierHandler.java:227)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointedInputGate.handleEvent(CheckpointedInputGate.java:180)
>  at 
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointedInputGate.pollNext(CheckpointedInputGate.java:158)
>  at 
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:110)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:423)
>  at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:204)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:681)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:636)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:620)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Could not 
> complete snapshot 2804 for operator cc-rule-keyByAndReduceStream (2/8)#1. 
> Failure reason: Checkpoint was declined.
>  at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:264)
>  at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.snapshotState(StreamOperatorStateHandler.java:169)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:371)
>  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointStreamOperator(SubtaskCheckpointCoordinatorImpl.java:706)
>  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.buildOperatorSnapshotFutures(SubtaskCheckpointCoordinatorImpl.java:627)
>  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.takeSnapshotSync(SubtaskCheckpointCoordinatorImpl.java:590)
>  at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState(SubtaskCheckpointCoordinatorImpl.java:312)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$8(StreamTask.java:1089)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:1073)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:1029)
>  ... 19 more
> 

[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 90c1d1a155c2ebc29b92799b887b71e093a838bd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] deadwind4 commented on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


deadwind4 commented on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1042598533


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] deadwind4 removed a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


deadwind4 removed a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1042493920


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * 2830d54980890a81ca8ad4a3e3c27bebc6448bca Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30471)
 
   * 9b492093f225a63b04fd8de96adeffec551bf31c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31710)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18303: [FLINK-25085][runtime] Add a scheduled thread pool for periodic tasks in RpcEndpoint

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18303:
URL: https://github.com/apache/flink/pull/18303#issuecomment-1008005239


   
   ## CI report:
   
   * 2830d54980890a81ca8ad4a3e3c27bebc6448bca Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30471)
 
   * 9b492093f225a63b04fd8de96adeffec551bf31c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25876) Implement overwrite in FileStoreScanImpl

2022-02-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-25876.

Resolution: Fixed

master: 44dfecd56695d9575e75b0b63a49aa8211ecde9f

> Implement overwrite in FileStoreScanImpl
> 
>
> Key: FLINK-25876
> URL: https://issues.apache.org/jira/browse/FLINK-25876
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
>
> Overwrite is a useful transaction for batch jobs to completely update a 
> partition for data correction. Currently FileStoreScanImpl doesn't implement 
> this transaction so we need to implement that.
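For illustration only (an editor's sketch, not part of the ticket): from the user's side, the kind of batch overwrite this transaction is meant to back looks roughly like the generic Flink SQL below. The table and column names are hypothetical, and the table store connector's exact syntax may differ.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical example: replace one partition of a partitioned table with
// corrected data. Assumes `my_table` and `corrected_source` are already
// registered in the catalog.
public class OverwritePartitionSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inBatchMode().build());

        // INSERT OVERWRITE replaces the whole target partition, which is the
        // kind of partition-level update described in this ticket.
        tEnv.executeSql(
                "INSERT OVERWRITE my_table PARTITION (dt = '2022-02-16') "
                        + "SELECT user_id, amount FROM corrected_source");
    }
}
```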



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25876) Implement overwrite in FileStoreScanImpl

2022-02-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-25876:
-
Component/s: Table Store

> Implement overwrite in FileStoreScanImpl
> 
>
> Key: FLINK-25876
> URL: https://issues.apache.org/jira/browse/FLINK-25876
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Overwrite is a useful transaction for batch jobs to completely update a 
> partition for data correction. Currently FileStoreScanImpl doesn't implement 
> this transaction so we need to implement that.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25876) Implement overwrite in FileStoreScanImpl

2022-02-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-25876:
-
Fix Version/s: table-store-0.1.0

> Implement overwrite in FileStoreScanImpl
> 
>
> Key: FLINK-25876
> URL: https://issues.apache.org/jira/browse/FLINK-25876
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Overwrite is a useful transaction for batch jobs to completely update a 
> partition for data correction. Currently FileStoreScanImpl doesn't implement 
> this transaction so we need to implement that.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] JingsongLi merged pull request #16: [FLINK-25876] Implement overwrite in FlinkStoreCommitImpl

2022-02-16 Thread GitBox


JingsongLi merged pull request #16:
URL: https://github.com/apache/flink-table-store/pull/16


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #18718: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chi…

2022-02-16 Thread GitBox


RocMarshal commented on a change in pull request #18718:
URL: https://github.com/apache/flink/pull/18718#discussion_r808681911



##
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##
@@ -377,22 +378,23 @@ Flink comes with four built-in BulkWriter factories:
 * OrcBulkWriterFactory
 
 {{< hint info >}}
-**Important** Bulk Formats can only have a rolling policy that extends the 
`CheckpointRollingPolicy`.
-The latter rolls on every checkpoint. A policy can roll additionally based on 
size or processing time.
+**重要** Bulk-encoded Format 仅支持一种继承了 `CheckpointRollingPolicy` 类的滚动策略。
+在每个 Checkpoint 都会滚动。另外也可以根据大小或处理时间进行滚动。
 {{< /hint >}}
 
-# Parquet format
+
+
+# Parquet Format
 
-Flink contains built in convenience methods for creating Parquet writer 
factories for Avro data. These methods
-and their associated documentation can be found in the AvroParquetWriters 
class.
+Flink 包含了为 Avro Format 数据创建 Parquet 写入工厂的内置便利方法。在 AvroParquetWriters 
类中可以发现那些方法以及相关的使用说明。

Review comment:
   nit: 
   Flink 包含了为 Avro Format 数据创建 Parquet 写入工厂的内置便利方法->   Flink 内置了为 Avro 
Format 数据创建 Parquet 写入工厂的快捷方法
   a minor comment.
   

##
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##
@@ -232,71 +232,72 @@ new HiveSource<>(
 {{< /tab >}}
 {{< /tabs >}}
 
-### Current Limitations
+
+
+### 当前限制
+
+对于大量积压的文件,Watermark 效果不佳。这是因为 Watermark 急于在一个文件中推进,而下一个文件可能包含比 Watermark 更晚的数据。
 
-Watermarking does not work very well for large backlogs of files. This is 
because watermarks eagerly advance within a file, and the next file might 
contain data later than the watermark.
+对于无界 File Sources,枚举器会会将当前所有已处理文件的路径记录到 state 中,在某些情况下,这可能会导致状态变得相当大。
+未来计划将引入一种压缩的方式来跟踪已经处理的文件(例如,将修改时间戳保持在边界以下)。
 
-For Unbounded File Sources, the enumerator currently remembers paths of all 
already processed files, which is a state that can, in some cases, grow rather 
large.
-There are plans to add a compressed form of tracking already processed files 
in the future (for example, by keeping modification timestamps below 
boundaries).
+
 
-### Behind the Scenes
+### 后记
 {{< hint info >}}
-If you are interested in how File Source works through the new data source API 
design, you may
-want to read this part as a reference. For details about the new data source 
API, check out the
-[documentation on data sources]({{< ref "docs/dev/datastream/sources.md" >}}) 
and
+如果你对新设计的数据源 API 中的 File Sources 是如何工作的感兴趣,可以阅读本部分作为参考。关于新的数据源 API 的更多细节,请参考
+[documentation on data sources]({{< ref "docs/dev/datastream/sources.md" >}}) 
和在
 https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface;>FLIP-27
-for more descriptive discussions.
+中获取更加具体的讨论详情。
 {{< /hint >}}
 
+
+
 ## File Sink
 
-The file sink writes incoming data into buckets. Given that the incoming 
streams can be unbounded,
-data in each bucket is organized into part files of finite size. The bucketing 
behaviour is fully configurable
-with a default time-based bucketing where we start writing a new bucket every 
hour. This means that each resulting
-bucket will contain files with records received during 1 hour intervals from 
the stream.
+File Sink 将传入的数据写入存储桶中。考虑到输入流可以是无界的,每个桶中的数据被组织成有限大小的 Part 文件。
+完全可以配置为基于时间的方式往桶中写入数据,比如我们可以设置每个小时的数据写入一个新桶中。这意味着桶中将包含一个小时间隔内接收到的记录。
 
-Data within the bucket directories is split into part files. Each bucket will 
contain at least one part file for
-each subtask of the sink that has received data for that bucket. Additional 
part files will be created according to the configurable
-rolling policy. For `Row-encoded Formats` (see [File Formats](#file-formats)) 
the default policy rolls part files based
-on size, a timeout that specifies the maximum duration for which a file can be 
open, and a maximum inactivity
-timeout after which the file is closed. For `Bulk-encoded Formats` we roll on 
every checkpoint and the user can
-specify additional conditions based on size or time.
+桶目录中的数据被拆分成多个 Part 文件。对于相应的接收数据的桶的 Sink 的每个 Subtask ,每个桶将至少包含一个 Part 
文件。将根据配置的滚动策略来创建其他 Part 文件。
+对于 `Row-encoded Formats`(参考 [Format Types](#sink-format-types))默认的策略是根据 Part 
文件大小进行滚动,需要指定文件打开状态最长时间的超时以及文件关闭后的非活动状态的超时时间。
+对于 `Bulk-encoded Formats` 我们在每次创建 Checkpoint 时进行滚动,并且用户也可以添加基于大小或者时间等的其他条件。
 
 {{< hint info >}}
 
-**IMPORTANT**: Checkpointing needs to be enabled when using the `FileSink` in 
`STREAMING` mode. Part files
-can only be finalized on successful checkpoints. If checkpointing is disabled, 
part files will forever stay
-in the `in-progress` or the `pending` state, and cannot be safely read by 
downstream systems.
+**重要**: 在 `STREAMING` 模式下使用 `FileSink` 需要开启 Checkpoint 功能。
+文件只在 Checkpoint 成功时生成。如果没有开启 Checkpoint 功能,文件将永远停留在 `in-progress` 或者 `pending` 
的状态,并且下游系统将不能安全读取该文件数据。
 
 {{< /hint >}}
 
 {{< img src="/fig/streamfilesink_bucketing.png"  width="100%" >}}
 
+
+
 ### Format Types
 
-The `FileSink` supports both row-wise and bulk encoding formats, such as 
[Apache 

[GitHub] [flink] flinkbot edited a comment on pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18769:
URL: https://github.com/apache/flink/pull/18769#issuecomment-1039977246


   
   ## CI report:
   
   * cf8a3915328570f8497893b3608900e46c85f69d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31655)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #16: [FLINK-25876] Implement overwrite in FlinkStoreCommitImpl

2022-02-16 Thread GitBox


JingsongLi commented on a change in pull request #16:
URL: https://github.com/apache/flink-table-store/pull/16#discussion_r808688217



##
File path: 
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/manifest/ManifestCommittable.java
##
@@ -27,26 +27,26 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Objects;
+import java.util.UUID;
 
 /** Manifest commit message. */
 public class ManifestCommittable {
 
+private final String uuid;
 private final Map>> newFiles;
-
 private final Map>> 
compactBefore;
-
 private final Map>> 
compactAfter;
 
 public ManifestCommittable() {
-this.newFiles = new HashMap<>();
-this.compactBefore = new HashMap<>();
-this.compactAfter = new HashMap<>();
+this(UUID.randomUUID().toString(), new HashMap<>(), new HashMap<>(), 
new HashMap<>());
 }
 
 public ManifestCommittable(
+String uuid,

Review comment:
   Can we use checkpoint id here? Just like 
https://github.com/apache/flink-table-store/pull/22




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24855) Source Coordinator Thread already exists. There should never be more than one thread driving the actions of a Source Coordinator.

2022-02-16 Thread WangMinChao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

WangMinChao closed FLINK-24855.
---
Resolution: Feedback Received

> Source Coordinator Thread already exists. There should never be more than one 
> thread driving the actions of a Source Coordinator.
> -
>
> Key: FLINK-24855
> URL: https://issues.apache.org/jira/browse/FLINK-24855
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.13.3
> Environment: flink-cdc 2.1
>Reporter: WangMinChao
>Priority: Critical
> Attachments: image-2022-01-12-09-23-04-210.png
>
>
>  
> When I am synchronizing large tables, I have the following problems:
> 2021-11-09 20:33:04,222 INFO 
> com.ververica.cdc.connectors.mysql.source.enumerator.MySqlSourceEnumerator [] 
> - Assign split MySqlSnapshotSplit\{tableId=db.table, splitId='db.table:383', 
> splitKeyType=[`id` BIGINT NOT NULL], splitStart=[9798290], 
> splitEnd=[9823873], highWatermark=null} to subtask 1
> 2021-11-09 20:33:04,248 INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
> checkpoint 101 (type=CHECKPOINT) @ 1636461183945 for job 
> 3cee105643cfee78b80cd0a41143b5c1.
> 2021-11-09 20:33:10,734 ERROR 
> org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL: Thread 
> 'SourceCoordinator-Source: mysqlcdc-source -> Sink: kafka-sink' produced an 
> uncaught exception. Stopping the process...
> java.lang.Error: Source Coordinator Thread already exists. There should never 
> be more than one thread driving the actions of a Source Coordinator. Existing 
> Thread: Thread[SourceCoordinator-Source: mysqlcdc-source -> Sink: 
> kafka-sink,5,main]
> at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider$CoordinatorExecutorThreadFactory.newThread(SourceCoordinatorProvider.java:119)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:619)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_191]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26168) The judgment of chainable ignores StreamExchangeMode when partitioner is ForwardForConsecutiveHashPartitioner

2022-02-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-26168:

Component/s: Runtime / Coordination

> The judgment of chainable ignores StreamExchangeMode when partitioner is 
> ForwardForConsecutiveHashPartitioner
> -
>
> Key: FLINK-26168
> URL: https://issues.apache.org/jira/browse/FLINK-26168
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Lijie Wang
>Assignee: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> After FLINK-26167, if the exchange mode is set as {{ALL_EDGES_BLOCKING}} 
> through {{{}table.exec.shuffle-mode{}}}, the shuffle mode of  
> {{ForwardForConsecutiveHashPartitioner}} will be set as {{BATCH}} (because it 
> may be converted to hash shuffle). But it should not affect the chain 
> creation, so we need to change the chaining logic and let the chainability 
> judgment ignore StreamExchangeMode when the partitioner is 
> ForwardForConsecutiveHashPartitioner.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24855) Source Coordinator Thread already exists. There should never be more than one thread driving the actions of a Source Coordinator.

2022-02-16 Thread WangMinChao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493653#comment-17493653
 ] 

WangMinChao commented on FLINK-24855:
-

cdc issue trace: https://github.com/ververica/flink-cdc-connectors/issues/865

> Source Coordinator Thread already exists. There should never be more than one 
> thread driving the actions of a Source Coordinator.
> -
>
> Key: FLINK-24855
> URL: https://issues.apache.org/jira/browse/FLINK-24855
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.13.3
> Environment: flink-cdc 2.1
>Reporter: WangMinChao
>Priority: Critical
> Attachments: image-2022-01-12-09-23-04-210.png
>
>
>  
> When I am synchronizing large tables, I have the following problems:
> 2021-11-09 20:33:04,222 INFO 
> com.ververica.cdc.connectors.mysql.source.enumerator.MySqlSourceEnumerator [] 
> - Assign split MySqlSnapshotSplit\{tableId=db.table, splitId='db.table:383', 
> splitKeyType=[`id` BIGINT NOT NULL], splitStart=[9798290], 
> splitEnd=[9823873], highWatermark=null} to subtask 1
> 2021-11-09 20:33:04,248 INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
> checkpoint 101 (type=CHECKPOINT) @ 1636461183945 for job 
> 3cee105643cfee78b80cd0a41143b5c1.
> 2021-11-09 20:33:10,734 ERROR 
> org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL: Thread 
> 'SourceCoordinator-Source: mysqlcdc-source -> Sink: kafka-sink' produced an 
> uncaught exception. Stopping the process...
> java.lang.Error: Source Coordinator Thread already exists. There should never 
> be more than one thread driving the actions of a Source Coordinator. Existing 
> Thread: Thread[SourceCoordinator-Source: mysqlcdc-source -> Sink: 
> kafka-sink,5,main]
> at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider$CoordinatorExecutorThreadFactory.newThread(SourceCoordinatorProvider.java:119)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:619)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_191]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26168) The judgment of chainable ignores StreamExchangeMode when partitioner is ForwardForConsecutiveHashPartitioner

2022-02-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-26168.
---
Resolution: Done

master/release-1.15:
8f3a0251d195ed2532abc31e602029dfcbf7bc77

> The judgment of chainable ignores StreamExchangeMode when partitioner is 
> ForwardForConsecutiveHashPartitioner
> -
>
> Key: FLINK-26168
> URL: https://issues.apache.org/jira/browse/FLINK-26168
> Project: Flink
>  Issue Type: Bug
>Reporter: Lijie Wang
>Assignee: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> After FLINK-26167, if the exchange mode is set as {{ALL_EDGES_BLOCKING}} 
> through {{{}table.exec.shuffle-mode{}}}, the shuffle mode of  
> {{ForwardForConsecutiveHashPartitioner}} will be set as {{BATCH}} (because it 
> may be converted to hash shuffle). But it should not affect the chain 
> creation, so we need to change the chaining logic and let the chainability 
> judgment ignore StreamExchangeMode when the partitioner is 
> ForwardForConsecutiveHashPartitioner.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zhuzhurk closed pull request #18789: [FLINK-26168][runtime] The judgment of chainable ignores StreamExchangeMode when partitioner is ForwardForConsecutiveHashPartitioner

2022-02-16 Thread GitBox


zhuzhurk closed pull request #18789:
URL: https://github.com/apache/flink/pull/18789


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on pull request #18789: [FLINK-26168][runtime] The judgment of chainable ignores StreamExchangeMode when partitioner is ForwardForConsecutiveHashPartitioner

2022-02-16 Thread GitBox


zhuzhurk commented on pull request #18789:
URL: https://github.com/apache/flink/pull/18789#issuecomment-1042583153


   The failed case is a known issue and unrelated. Merging.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal edited a comment on pull request #18480: [FLINK-25789][docs-zh] Translate the formats/hadoop page into Chinese.

2022-02-16 Thread GitBox


RocMarshal edited a comment on pull request #18480:
URL: https://github.com/apache/flink/pull/18480#issuecomment-1037264660


   @leonardBang @wuchong Could you help me to merge it if there's nothing 
inappropriate? Thank you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on pull request #18789: [FLINK-26168][runtime] The judgment of chainable ignores StreamExchangeMode when partitioner is ForwardForConsecutiveHashPartitioner

2022-02-16 Thread GitBox


wanglijie95 commented on pull request #18789:
URL: https://github.com/apache/flink/pull/18789#issuecomment-1042564457


   The CI always fails due to FLINK-26199


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25325) Migration Flink from Junit4 to Junit5

2022-02-16 Thread Aiden Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493633#comment-17493633
 ] 

Aiden Gong commented on FLINK-25325:


Hi [~jingge] [~renqs]. We should confirm that powermock works correctly on JUnit 5, 
due to the upstream [issue|https://github.com/powermock/powermock/issues/929].

> Migration Flink from Junit4 to Junit5
> -
>
> Key: FLINK-25325
> URL: https://issues.apache.org/jira/browse/FLINK-25325
> Project: Flink
>  Issue Type: New Feature
>  Components: Tests
>Affects Versions: 1.14.0
>Reporter: Jing Ge
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.15.0
>
>
> Based on the consensus from the mailing list discussion[1][2], we have 
> started working on the JUnit4 to JUnit5 migration. 
> This is the umbrella ticket which describes the big picture of the migration 
> with following steps:
>  * AssertJ integration and guideline
>  * Test Framework upgrade from JUnit4 to JUnit5
>  * JUnit5 migration guideline(document and reference migration)
>  * Optimization for issues found while writing new test in JUint5
>  * [Long-term]Module based graceful migration of old tests in JUnit4 to JUnit5
>  
> All JUnit5 migration related tasks are welcome to be created under this 
> umbrella.
>  
> [1] [[DISCUSS]Moving to 
> JUnit5|https://lists.apache.org/thread/jsjvc2cqb91pyh47d4p6olk3c1vxqm3w]
> [2] [[DISCUSS] Conventions on assertions to use in 
> tests|https://lists.apache.org/thread/33t7hz8w873p1bc5msppk65792z08rgt]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26205) Support Online Model Save in FlinkML

2022-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26205:
---
Labels: pull-request-available  (was: )

> Support Online Model Save in FlinkML
> 
>
> Key: FLINK-26205
> URL: https://issues.apache.org/jira/browse/FLINK-26205
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: weibo zhao
>Priority: Major
>  Labels: pull-request-available
>
> Support Online Model Save in FlinkML.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] weibozhao opened a new pull request #60: [FLINK-26205] Support Online Model Save in FlinkML

2022-02-16 Thread GitBox


weibozhao opened a new pull request #60:
URL: https://github.com/apache/flink-ml/pull/60


   ## What is the purpose of the change
   Support Online Model Save in FlinkML.
   
   ## Brief change log
   Add code for saving online models.
   
   ## Does this pull request potentially affect one of the following parts:
   Dependencies (does it add or upgrade a dependency): (no)
   The public API, i.e., is any changed class annotated with @public(Evolving): 
(no)
   Does this pull request introduce a new feature? (no)
   If yes, how is the feature documented? (Java doc)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26205) Support Online Model Save in FlinkML

2022-02-16 Thread weibo zhao (Jira)
weibo zhao created FLINK-26205:
--

 Summary: Support Online Model Save in FlinkML
 Key: FLINK-26205
 URL: https://issues.apache.org/jira/browse/FLINK-26205
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: weibo zhao


Support Online Model Save in FlinkML.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26142) Integrate flink-kubernetes-operator repo with CI/CD

2022-02-16 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493629#comment-17493629
 ] 

Yang Wang commented on FLINK-26142:
---

I am afraid we still need some discussion about how and when to release the 
docker images. For example, do we need to create an official docker repo on 
Docker Hub, or should we simply use the GitHub image repo? Do we need a nightly 
image, or should every commit on master trigger an image build?

So this ticket will focus on the CI part and I will create a new subtask for 
continuous delivery via docker images.

> Integrate flink-kubernetes-operator repo with CI/CD
> ---
>
> Key: FLINK-26142
> URL: https://issues.apache.org/jira/browse/FLINK-26142
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Yang Wang
>Priority: Major
>
> We should be able to run the unit/integration tests on the existing Flink CI 
> infra and publish docker images for new builds to dockerhub



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18789: [FLINK-26168][runtime] The judgment of chainable ignores StreamExchangeMode when partitioner is ForwardForConsecutiveHashPartitioner

2022-02-16 Thread GitBox


flinkbot edited a comment on pull request #18789:
URL: https://github.com/apache/flink/pull/18789#issuecomment-1040944258


   
   ## CI report:
   
   * ff43c28e2c5bd3a227eb6f826d7c1e5dea729479 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31682)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26204) set table.optimizer.distinct-agg.split.enabled to true and using Window TVF CUMULATE to count users, It will appear that the value of the current step window time is gre

2022-02-16 Thread Bo Huang (Jira)
Bo Huang created FLINK-26204:


 Summary: set table.optimizer.distinct-agg.split.enabled to true 
and using Window TVF CUMULATE to count users, It will appear that the value of 
the current step window time is greater than the value of the next step window 
time
 Key: FLINK-26204
 URL: https://issues.apache.org/jira/browse/FLINK-26204
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.14.3, 1.14.0
Reporter: Bo Huang
 Attachments: TestApp.java, test.png

set table.optimizer.distinct-agg.split.enabled to true

using Window TVF CUMULATE to count users

It will appear that the value of the current step window time is greater than 
the value of the next step window time
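For reference, an editor's rough reconstruction of the setup described above; table and column names such as `clicks`, `user_id` and `event_time` are hypothetical.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical example of the configuration plus CUMULATE query described in
// this issue. Assumes a `clicks` table with `user_id` and `event_time` columns
// is already registered.
public class CumulateDistinctCountSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());

        // The option reported to trigger the problem.
        tEnv.getConfig()
                .getConfiguration()
                .setString("table.optimizer.distinct-agg.split.enabled", "true");

        // Count distinct users per cumulating window step.
        tEnv.executeSql(
                "SELECT window_start, window_end, COUNT(DISTINCT user_id) AS uv "
                        + "FROM TABLE(CUMULATE(TABLE clicks, DESCRIPTOR(event_time), "
                        + "INTERVAL '1' MINUTE, INTERVAL '1' HOUR)) "
                        + "GROUP BY window_start, window_end");
    }
}
```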



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] lindong28 commented on pull request #57: [FLINK-26100][docs] Set up Flink ML Document Website

2022-02-16 Thread GitBox


lindong28 commented on pull request #57:
URL: https://github.com/apache/flink-ml/pull/57#issuecomment-1042536822


   Here are some links related to setting up the Flink ML website and 
checking its build health:
   -  Wiki for setting up Flink nightly website 
[link](https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation)
   - Repo to create/update the scripts which pull docs from Flink ML 
[link](https://github.com/apache/infrastructure-bb2)
   - Flink ML website build job history 
[link](https://ci2.apache.org/#/builders/103)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] lindong28 merged pull request #59: [hotfix] Fix version and branch info of docs in release 2.0

2022-02-16 Thread GitBox


lindong28 merged pull request #59:
URL: https://github.com/apache/flink-ml/pull/59


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii closed pull request #18797: [FLINK-26180] Update docs to introduce the compaction for FileSink.

2022-02-16 Thread GitBox


gaoyunhaii closed pull request #18797:
URL: https://github.com/apache/flink/pull/18797


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #18797: [FLINK-26180] Update docs to introduce the compaction for FileSink.

2022-02-16 Thread GitBox


gaoyunhaii commented on a change in pull request #18797:
URL: https://github.com/apache/flink/pull/18797#discussion_r808637597



##
File path: docs/content/docs/connectors/datastream/filesystem.md
##
@@ -958,6 +958,73 @@ val sink = FileSink
 {{< /tab >}}
 {{< /tabs >}}
 
+### Compaction
+
+Since version 1.15 `FileSink` supports compaction of the `pending` files,
+which allows the application to use a smaller checkpoint interval without 
generating a large number of small files,
+especially when using the [bulk encoded formats]({{< ref 
"docs/connectors/datastream/filesystem#bulk-encoded-formats" >}})
+that have to roll on every checkpoint.
+
+Compaction could be enabled with
+
+{{< tabs "enablecompaction" >}}
+{{< tab "Java" >}}
+```java
+
+FileSink<Integer> fileSink =
+   FileSink.forRowFormat(new Path(path), new SimpleStringEncoder<Integer>())
+   .enableCompact(
+   FileCompactStrategy.Builder.newBuilder()
+   .setSizeThreshold(1024)
+   .enableCompactionOnCheckpoint(5)
+   .build(),
+   new RecordWiseFileCompactor<>(
+   new DecoderBasedReader.Factory<>(SimpleStringDecoder::new)))
+   .build();
+
+```
+{{< /tab >}}
+{{< tab "Scala" >}}
+```scala
+
+val fileSink: FileSink[Integer] =
+  FileSink.forRowFormat(new Path(path), new SimpleStringEncoder[Integer]())
+.enableCompact(
+  FileCompactStrategy.Builder.newBuilder()
+.setSizeThreshold(1024)
+.enableCompactionOnCheckpoint(5)
+.build(),
+  new RecordWiseFileCompactor(
+new DecoderBasedReader.Factory(() => new SimpleStringDecoder)))
+.build()
+
+```
+{{< /tab >}}
+{{< /tabs >}}
+
+Once enabled, compaction happens between the time files become `pending` and 
the time they get committed. The pending files will
+first be committed to temporary files whose paths start with `.`. Then these 
files will be compacted according to
+the strategy by the compactor specified by the user, and new compacted 
pending files will be generated.
+Then these pending files will be emitted to the committer to be committed as 
the final files. After that, the source files will be removed.
+
+When enabling compaction, you need to specify the {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactStrategy.html" 
name="FileCompactStrategy">}}
+and the {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactor.html" 
name="FileCompactor">}}.
+
+The {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactStrategy.html" 
name="FileCompactStrategy">}} specifies
+when and which files get compacted. Currently, there are two parallel 
conditions: the target file size and the number of checkpoints that have passed.
+Once the total size of the cached files has reached the size threshold, or the 
number of checkpoints since the last compaction has reached the specified 
number, 
+the cached files will be scheduled for compaction.
+
+The {{< javadoc 
file="org/apache/flink/connector/file/sink/compactor/FileCompactor.html" 
name="FileCompactor">}} specifies how to compact
+the given list of `Path`s and write the result to {{< javadoc 
file="org/apache/flink/connector/file/sink/filesystem/CompactingFileWriter.html"
 name="CompactingFileWriter">}}. It can be classified into two types 
according to the type of the given `CompactingFileWriter`:

Review comment:
   The link to CompactingFileWriter seems not right. Also the same with the 
Chinese version




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24677) JdbcBatchingOutputFormat should not generate circulate chaining of exceptions when flushing fails in timer thread

2022-02-16 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493622#comment-17493622
 ] 

Caizhi Weng commented on FLINK-24677:
-

[~2011aad] thanks for your interest in contributing! Sure you can. You can also 
ping me (@tsreaper) on GitHub for the review when your PR is ready.

> JdbcBatchingOutputFormat should not generate circulate chaining of exceptions 
> when flushing fails in timer thread
> -
>
> Key: FLINK-24677
> URL: https://issues.apache.org/jira/browse/FLINK-24677
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Priority: Major
>
> This is reported from the [user mailing 
> list|https://lists.apache.org/thread.html/r3e725f52e4f325b9dcb790635cc642bd6018c4bca39f86c71b8a60f4%40%3Cuser.flink.apache.org%3E].
> In the timer thread created in {{JdbcBatchingOutputFormat#open}}, 
> {{flushException}} field will be recorded if the call to {{flush}} throws an 
> exception. This exception is used to fail the job in the main thread.
> However {{JdbcBatchingOutputFormat#flush}} will also check for this exception 
> and will wrap it with a new layer of runtime exception. This will cause a 
> super long stack when the main thread finally discover the exception and 
> fails.
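A minimal, self-contained sketch of the pattern described above (an editor's illustration, not the actual JdbcBatchingOutputFormat code): the timer thread records the failure, flush() wraps whatever is recorded, and the wrapper is recorded again, so the cause chain grows by one level per timer tick.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Simplified illustration of the chaining problem; the class and method names
// only mimic the real output format.
public class CirculateChainingSketch {

    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private volatile Exception flushException;

    void flush() {
        if (flushException != null) {
            // Wrapping the already recorded failure adds one more cause level
            // every time the timer calls flush() again.
            throw new RuntimeException("Writing records to JDBC failed.", flushException);
        }
        throw new RuntimeException("flush failed");
    }

    void open() {
        timer.scheduleWithFixedDelay(
                () -> {
                    try {
                        flush();
                    } catch (Exception e) {
                        // The new wrapper replaces the recorded exception, so
                        // the next tick wraps it once more.
                        flushException = e;
                    }
                },
                0, 100, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        CirculateChainingSketch sketch = new CirculateChainingSketch();
        sketch.open();
        Thread.sleep(1000);
        sketch.timer.shutdownNow();

        int depth = 0;
        for (Throwable t = sketch.flushException; t != null; t = t.getCause()) {
            depth++;
        }
        // Roughly one cause level per timer tick, which is what produces the
        // very long stack reported in this issue.
        System.out.println("Cause chain depth: " + depth);
    }
}
```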



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-25409) Add cache metric to LookupFunction

2022-02-16 Thread Yuan Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493621#comment-17493621
 ] 

Yuan Zhu edited comment on FLINK-25409 at 2/17/22, 3:08 AM:


Hi, all. Sorry for my late reply. As discussed before, I have made a brief design: 

[https://docs.google.com/document/d/1L2eo7VABZBdRxoRP_wPvVwuvTZOV9qrN9gEQxjhSJOc/edit?usp=sharing]

Shall I propose a discussion on the mailing list?


was (Author: straw):
Hi, all. Sorry for my late reply. As discussion before, I make a brief design: 

[https://docs.google.com/document/d/1L2eo7VABZBdRxoRP_wPvVwuvTZOV9qrN9gEQxjhSJOc/edit?usp=sharing]

> Add cache metric to LookupFunction
> --
>
> Key: FLINK-25409
> URL: https://issues.apache.org/jira/browse/FLINK-25409
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Ecosystem
>Reporter: Yuan Zhu
>Priority: Major
>
> Since we encounter performance problem when lookup join in production env 
> frequently, adding metrics to monitor Lookup function cache is very helpful 
> to troubleshoot.
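A plain-Java sketch of the kind of cache instrumentation being proposed (an editor's illustration; in a real lookup function the counters would be registered on the operator's MetricGroup rather than printed):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Wraps a simple cache and counts hits and misses, the two numbers a lookup
// join cache metric would expose. All names here are hypothetical.
public class InstrumentedLookupCache<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    public V lookup(K key, Function<K, V> loader) {
        V cached = cache.get(key);
        if (cached != null) {
            hits.incrementAndGet();
            return cached;
        }
        misses.incrementAndGet();
        V loaded = loader.apply(key); // e.g. a JDBC/HBase point query
        if (loaded != null) {
            cache.put(key, loaded);
        }
        return loaded;
    }

    public double hitRate() {
        long h = hits.get();
        long total = h + misses.get();
        return total == 0 ? 0.0 : (double) h / total;
    }

    public static void main(String[] args) {
        InstrumentedLookupCache<Integer, String> cache = new InstrumentedLookupCache<>();
        for (int i = 0; i < 10; i++) {
            cache.lookup(i % 3, k -> "dim-" + k);
        }
        System.out.println("hit rate: " + cache.hitRate());
    }
}
```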



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


  1   2   3   4   5   6   7   8   9   >