[jira] [Updated] (FLINK-32866) Unify the TestLoggerExtension config of junit5

2023-08-18 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-32866:

Summary: Unify the TestLoggerExtension config of junit5  (was: Clean up the 
`@ExtendWith(TestLoggerExtension.class)` for modules that added the 
`TestLoggerExtension` to the `org.junit.jupiter.api.extension.Extension 
resource` file)

> Unify the TestLoggerExtension config of junit5
> --
>
> Key: FLINK-32866
> URL: https://issues.apache.org/jira/browse/FLINK-32866
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Minor
>  Labels: pull-request-available, starter
>
> Some modules added the {{TestLoggerExtension}} to the 
> {{org.junit.jupiter.api.extension.Extension}} resource file. Test classes in 
> these modules don't need to add {{@ExtendWith(TestLoggerExtension.class)}} at 
> class level.
> This JIRA proposes cleaning up the {{@ExtendWith(TestLoggerExtension.class)}} 
> annotations in modules that added the {{TestLoggerExtension}} to the 
> {{org.junit.jupiter.api.extension.Extension}} resource file.
> Update: We could also investigate to what extent we could remove the 
> {{org.junit.jupiter.api.extension.Extension}} resource files from certain 
> modules. It only has to be present once on the classpath, so a single location 
> for this configuration would be nice (e.g. putting it in 
> {{flink-test-utils-parent/flink-test-utils-junit}}). But not all modules have 
> that one as their dependency, so a single resource file for this configuration 
> might not be possible. At least we can reduce the number of files to a 
> minimum.
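For reference, the resource-file registration relies on JUnit 5's extension auto-detection via the standard ServiceLoader mechanism. A minimal sketch of the two files involved (the extension's fully qualified class name is taken from Flink's test utilities and should be verified against the module's actual package):

```
# src/test/resources/META-INF/services/org.junit.jupiter.api.extension.Extension
org.apache.flink.util.TestLoggerExtension

# src/test/resources/junit-platform.properties
junit.jupiter.extensions.autodetection.enabled = true
```

With auto-detection enabled, a per-class `@ExtendWith(TestLoggerExtension.class)` annotation in that module is redundant, which is what this ticket proposes to clean up.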



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32895) Introduce the max attempts for Exponential Delay Restart Strategy

2023-08-18 Thread Rui Fan (Jira)
Rui Fan created FLINK-32895:
---

 Summary: Introduce the max attempts for Exponential Delay Restart 
Strategy
 Key: FLINK-32895
 URL: https://issues.apache.org/jira/browse/FLINK-32895
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Rui Fan
Assignee: Rui Fan


Currently, Flink has 3 restart strategies: fixed-delay, failure-rate and 
exponential-delay.

The exponential-delay strategy is suitable if a job continues to fail for a 
period of time. The fixed-delay and failure-rate strategies have a max-attempts 
mechanism, meaning the job stops restarting and fails once the number of 
restart attempts exceeds the threshold of max attempts.

The max-attempts mechanism is reasonable: Flink should not, and does not need 
to, restart a job indefinitely if it keeps failing. However, the 
exponential-delay strategy doesn't have a max-attempts mechanism.

I propose introducing 
`restart-strategy.exponential-delay.max-attempts-before-reset` to support the 
max-attempts mechanism for exponential-delay. It means Flink won't restart the 
job if the number of failures before a reset exceeds max-attempts-before-reset 
when exponential-delay is enabled.
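The proposed behavior can be sketched as follows. This is a minimal illustration of the idea, not Flink's implementation; the parameter names and values are invented for the example:

```python
def restart_delays(initial_delay, max_delay, backoff_multiplier,
                   max_attempts_before_reset):
    """Yield the delay before each restart attempt.

    The delay grows exponentially up to max_delay. Once the number of
    attempts since the last reset reaches the threshold, no further
    delays are yielded: the next failure would fail the job instead of
    restarting it.
    """
    delay = initial_delay
    for _ in range(max_attempts_before_reset):
        yield delay
        delay = min(delay * backoff_multiplier, max_delay)

# Hypothetical settings: 1s initial delay, 60s cap, doubling backoff,
# at most 5 restart attempts before the job is allowed to fail.
delays = list(restart_delays(1, 60, 2.0, 5))
```

A successful run longer than the (separately configured) reset threshold would reset both the delay and the attempt counter, mirroring the "before-reset" part of the proposed option name.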




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32661) OperationRelatedITCase.testOperationRelatedApis fails on AZP

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32661:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> OperationRelatedITCase.testOperationRelatedApis fails on AZP
> 
>
> Key: FLINK-32661
> URL: https://issues.apache.org/jira/browse/FLINK-32661
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51452=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=12114
> fails as 
> {noformat}
> Jul 20 04:23:49 org.opentest4j.AssertionFailedError: 
> Jul 20 04:23:49 
> Jul 20 04:23:49 Expecting actual's toString() to return:
> Jul 20 04:23:49   "PENDING"
> Jul 20 04:23:49 but was:
> Jul 20 04:23:49   "RUNNING"
> Jul 20 04:23:49   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jul 20 04:23:49   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jul 20 04:23:49   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jul 20 04:23:49   at 
> org.apache.flink.table.gateway.rest.OperationRelatedITCase.testOperationRelatedApis(OperationRelatedITCase.java:91)
> Jul 20 04:23:49   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 20 04:23:49   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 20 04:23:49   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 20 04:23:49   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 20 04:23:49   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
> Jul 20 04:23:49   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:21
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31493) helm upgrade does not work, because repo path does not follow helm standards

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31493:
---
  Labels: auto-deprioritized-major helm  (was: helm stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> helm upgrade does not work, because repo path does not follow helm standards
> 
>
> Key: FLINK-31493
> URL: https://issues.apache.org/jira/browse/FLINK-31493
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Emmanuel Leroy
>Priority: Minor
>  Labels: auto-deprioritized-major, helm
>
> The Helm repo for flink-operator is a folder that includes the version, which 
> does not follow the Helm chart repo standards.
> In a standard Helm repo, the repo URL is the name of the product (without a 
> version) and the folder contains the different versions of the chart.
> This is an issue because the repo itself needs to be added again every time 
> the version is upgraded, as opposed to adding the repo once and then upgrading 
> the version.
> When attempting to add the latest repo, helm complains that the repo already 
> exists. It is necessary to first remove the repo and then add the updated one.
> Upgrading the chart then doesn't work, because helm expects the chart of the 
> previous version to be in the same repo, but it cannot be found in the newly 
> added repo. So the chart needs to be uninstalled and the new one installed.
> The solution is to use a common path for all versions of the chart and 
> maintain a manifest with the various versions (instead of different folders 
> with different manifests).
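For comparison, a Helm repository that follows the standard layout exposes one version-independent repo URL whose single `index.yaml` lists every chart version, so `helm repo update` followed by `helm upgrade` can resolve both the installed and the new version. A hypothetical sketch (chart name, versions, and paths invented for illustration):

```yaml
# index.yaml served from a single, version-independent repo URL
apiVersion: v1
entries:
  flink-kubernetes-operator:
    - version: 1.5.0
      urls:
        - charts/flink-kubernetes-operator-1.5.0.tgz
    - version: 1.6.0
      urls:
        - charts/flink-kubernetes-operator-1.6.0.tgz
```

With this layout, the repo is added once and new releases only append entries to the index.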



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31526) Various test failures in PyFlink

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31526:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Various test failures in PyFlink
> 
>
> Key: FLINK-31526
> URL: https://issues.apache.org/jira/browse/FLINK-31526
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.1
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47328=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=37186
> {code}
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkBatchUserDefinedFunctionTests::test_udf_in_join_condition_2
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkBatchUserDefinedFunctionTests::test_udf_with_constant_params
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkBatchUserDefinedFunctionTests::test_udf_without_arguments
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_all_data_types
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_all_data_types_expression
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_chaining_scalar_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_create_and_drop_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_open
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_overwrite_builtin_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_scalar_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_udf_in_join_condition
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_udf_in_join_condition_2
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_udf_with_constant_params
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udf.py::PyFlinkEmbeddedThreadTests::test_udf_without_arguments
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkStreamUserDefinedFunctionTests::test_execute_from_json_plan
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkStreamUserDefinedFunctionTests::test_table_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkStreamUserDefinedFunctionTests::test_table_function_with_sql_query
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkBatchUserDefinedFunctionTests::test_table_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkBatchUserDefinedFunctionTests::test_table_function_with_sql_query
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkEmbeddedThreadTests::test_table_function
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_udtf.py::PyFlinkEmbeddedThreadTests::test_table_function_with_sql_query
> Mar 19 04:08:57 ERROR 
> pyflink/table/tests/test_window.py::StreamTableWindowTests::test_over_window
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32483) RescaleCheckpointManuallyITCase.testCheckpointRescalingOutKeyedState fails on AZP

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32483:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> RescaleCheckpointManuallyITCase.testCheckpointRescalingOutKeyedState fails on 
> AZP
> -
>
> Key: FLINK-32483
> URL: https://issues.apache.org/jira/browse/FLINK-32483
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.17.2
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50397=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=7495
>  fails with
> {noformat}
> Jun 26 06:08:57 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 21.041 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase
> Jun 26 06:08:57 [ERROR] 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingOutKeyedState
>   Time elapsed: 6.435 s  <<< FAILURE!
> Jun 26 06:08:57 java.lang.AssertionError: expected:<[(0,24000), (2,58500), 
> (0,34500), (0,45000), (3,43500), (2,18000), (1,6000), (1,16500), (0,28500), 
> (0,52500), (3,27000), (1,51000), (2,25500), (0,1500), (0,49500), (3,0), 
> (3,48000), (0,36000), (2,22500), (1,10500), (0,46500), (2,33000), (1,21000), 
> (0,9000), (0,57000), (3,31500), (2,19500), (1,7500), (1,55500), (3,42000), 
> (2,3), (0,54000), (2,40500), (1,4500), (3,15000), (2,3000), (1,39000), 
> (2,13500), (0,37500), (0,61500), (3,12000), (3,6)]> but was:<[(2,58500), 
> (0,34500), (0,45000), (3,43500), (2,18000), (1,16500), (0,52500), (3,27000), 
> (2,25500), (0,49500), (3,0), (3,48000), (0,36000), (2,22500), (1,21000), 
> (0,9000), (0,57000), (3,31500), (1,7500), (2,3), (0,54000), (2,40500), 
> (1,4500), (2,3000), (1,39000), (2,13500), (0,61500), (3,12000)]>
> Jun 26 06:08:57   at org.junit.Assert.fail(Assert.java:89)
> Jun 26 06:08:57   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jun 26 06:08:57   at org.junit.Assert.assertEquals(Assert.java:120)
> Jun 26 06:08:57   at org.junit.Assert.assertEquals(Assert.java:146)
> Jun 26 06:08:57   at 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.restoreAndAssert(RescaleCheckpointManuallyITCase.java:219)
> Jun 26 06:08:57   at 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingKeyedState(RescaleCheckpointManuallyITCase.java:138)
> Jun 26 06:08:57   at 
> org.apache.flink.test.checkpointing.RescaleCheckpointManuallyITCase.testCheckpointRescalingOutKeyedState(RescaleCheckpointManuallyITCase.java:116)
> Jun 26 06:08:57   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31567) Promote release 1.17

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31567:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Promote release 1.17
> 
>
> Key: FLINK-31567
> URL: https://issues.apache.org/jira/browse/FLINK-31567
> Project: Flink
>  Issue Type: New Feature
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Once the release has been finalized (FLINK-31562), the last step of the 
> process is to promote the release within the project and beyond. Please wait 
> for 24h after finalizing the release in accordance with the [ASF release 
> policy|http://www.apache.org/legal/release-policy.html#release-announcements].
> *Final checklist to declare this issue resolved:*
>  # Website pull request to [list the 
> release|http://flink.apache.org/downloads.html] merged
>  # Release announced on the user@ mailing list.
>  # Blog post published, if applicable.
>  # Release recorded in 
> [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
>  # Release announced on social media.
>  # Completion declared on the dev@ mailing list.
>  # Update Homebrew: 
> [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] (seems to be done 
> automatically for both minor and major releases)
>  # Update quickstart scripts in {{{}flink-web{}}}, under the {{q/}} directory
>  # Update the japicmp configuration
>  ** corresponding SNAPSHOT branch japicmp reference version set to the just 
> released version, and API compatibility checks for {{@PublicEvolving}} 
> enabled
>  ** (minor version release only) master branch japicmp reference version set 
> to the just released version
>  ** (minor version release only) master branch japicmp exclusions have been 
> cleared
>  # Update the list of previous versions in {{docs/config.toml}} on the master 
> branch.
>  # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch 
> of the _now deprecated_ Flink version (i.e. 1.15 if 1.17.0 is released)
>  # Update stable and master alias in 
> [https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-18568) Add Support for Azure Data Lake Store Gen 2 in File Sink

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18568:
---
Labels: auto-deprioritized-major stale-assigned  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add Support for Azure Data Lake Store Gen 2 in File Sink
> 
>
> Key: FLINK-18568
> URL: https://issues.apache.org/jira/browse/FLINK-18568
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Israel Ekpo
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-assigned
>
> The objective of this improvement is to add support for Azure Data Lake Store 
> Gen 2 (ADLS Gen2) [2] as one of the supported filesystems for the FileSink 
> [1].
> [1] 
> https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/datastream/filesystem/#file-sink
> [2] https://hadoop.apache.org/docs/current/hadoop-azure/abfs.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32393) NettyClientServerSslTest.testSslPinningForInvalidFingerprint fails with Address already in use

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32393:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> NettyClientServerSslTest.testSslPinningForInvalidFingerprint fails with 
> Address already in use
> --
>
> Key: FLINK-32393
> URL: https://issues.apache.org/jira/browse/FLINK-32393
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.17.2
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50162=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=7794
> fails with
> {noformat}
> Jun 19 05:40:33 [ERROR] Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 10.095 s <<< FAILURE! - in 
> org.apache.flink.runtime.io.network.netty.NettyClientServerSslTest
> Jun 19 05:40:33 [ERROR] 
> NettyClientServerSslTest.testSslPinningForInvalidFingerprint  Time elapsed: 
> 1.236 s  <<< ERROR!
> Jun 19 05:40:33 
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException:
>  bind(..) failed: Address already in use
> Jun 19 05:40:33 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32604) PyFlink end-to-end fails with kafka-server-stop.sh: No such file or directory

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32604:
---
Labels: stale-critical test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its sub-tasks have been updated 
for 14 days. I have gone ahead and marked it "stale-critical". If this ticket 
is critical, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> PyFlink end-to-end fails  with kafka-server-stop.sh: No such file or 
> directory 
> ---
>
> Key: FLINK-32604
> URL: https://issues.apache.org/jira/browse/FLINK-32604
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: stale-critical, test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51253=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=7883
> fails as
> {noformat}
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/kafka-common.sh: line 
> 117: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-27379214502/kafka_2.12-3.2.3/bin/kafka-server-stop.sh:
>  No such file or directory
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/kafka-common.sh: line 
> 121: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-27379214502/kafka_2.12-3.2.3/bin/zookeeper-server-stop.sh:
>  No such file or directory
> Jul 13 19:43:07 [FAIL] Test script contains errors.
> Jul 13 19:43:07 Checking of logs skipped.
> Jul 13 19:43:07 
> Jul 13 19:43:07 [FAIL] 'PyFlink end-to-end test' failed after 0 minutes and 
> 40 seconds! Test exited with exit code 1
> Jul 13 19:43:07 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31755) ROW function can not work with RewriteIntersectAllRule

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31755:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> ROW function can not work with RewriteIntersectAllRule
> --
>
> Key: FLINK-31755
> URL: https://issues.apache.org/jira/browse/FLINK-31755
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Aitozi
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Reproduce case:
> {code:java}
> create table row_sink (
>   `b` ROW
> ) with (
>   'connector' = 'values'
> )
> util.verifyRelPlanInsert(
> "INSERT INTO row_sink " +
>   "SELECT ROW(a, b) FROM complex_type_src intersect all " +
>   "SELECT ROW(c, d) FROM complex_type_src ")
> {code}
> It fails with 
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Type mismatch:
> rel rowtype: RecordType(RecordType:peek_no_expand(VARCHAR(2147483647) 
> CHARACTER SET "UTF-16LE" EXPR$0, INTEGER EXPR$1) NOT NULL EXPR$0) NOT NULL
> equiv rowtype: RecordType(RecordType(VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE" EXPR$0, INTEGER EXPR$1) NOT NULL EXPR$0) NOT NULL
> Difference:
> EXPR$0: RecordType:peek_no_expand(VARCHAR(2147483647) CHARACTER SET 
> "UTF-16LE" EXPR$0, INTEGER EXPR$1) NOT NULL -> RecordType(VARCHAR(2147483647) 
> CHARACTER SET "UTF-16LE" EXPR$0, INTEGER EXPR$1) NOT NULL
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:592)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:613)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:144)
>   ... 68 more
> {code}
> The reason is:
> The ROW function generates the {{FULLY_QUALIFIED}} type, but after the 
> {{RewriteIntersectAllRule}} optimization it produces the 
> {{PEEK_FIELDS_NO_EXPAND}} type. So the volcano planner complains about a type 
> mismatch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in k8s with IPv6 stack

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31928:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> flink-kubernetes works not properly in k8s with IPv6 stack
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
> Environment: Kubernetes of IPv6 stack.
>Reporter: Yi Cai
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> As described in 
> [https://github.com/square/okhttp/issues/7368], the okhttp3 shaded into 
> flink-kubernetes does not work properly with an IPv6 stack in k8s. We need to 
> upgrade okhttp3 to version 4.10.0 and shade okhttp3's 
> org.jetbrains.kotlin:kotlin-stdlib dependency in flink-kubernetes, or just 
> upgrade kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30743) Improve Kubernetes HA Services docs

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-30743:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Improve Kubernetes HA Services docs
> ---
>
> Key: FLINK-30743
> URL: https://issues.apache.org/jira/browse/FLINK-30743
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Wolfgang Buchner
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> I recently tried to set up a Flink standalone session cluster with Kubernetes 
> HA and needed to adjust ConfigMap RBAC settings via trial and error, because 
> the documentation at:
>  * 
> [https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/ha/kubernetes_ha/]
>  * 
> [https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/resource-providers/standalone/kubernetes/#high-availability-with-standalone-kubernetes]
> didn't state everything that was needed. E.g. for the configmaps resource the 
> verb "watch" is essential.
> With
> {noformat}
> apiVersion: v1
> kind: ServiceAccount
> metadata:
>   labels:
>     app: flink
>   name: flink-service-account
>   namespace: lakehouse
> ---
> apiVersion: rbac.authorization.k8s.io/v1
> kind: Role
> metadata:
>   labels:
>     app: flink
>   name: flink-role-binding-flink
>   namespace: lakehouse
> rules:
> - apiGroups:
>   - ""
>   resources:
>   - configmaps
>   verbs:
>   - get
>   - create
>   - delete
>   - update
>   - list
>   - watch
> ---
> apiVersion: rbac.authorization.k8s.io/v1
> kind: RoleBinding
> metadata:
>   labels:
>     app: flink
>   name: flink-role-binding-default
>   namespace: lakehouse
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: Role
>   name: flink-role-binding-flink
> subjects:
> - kind: 
> {noformat}

[jira] [Updated] (FLINK-22091) env.java.home option didn't take effect in resource negotiator

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22091:
---
Labels: auto-deprioritized-major auto-deprioritized-minor stale-assigned  
(was: auto-deprioritized-major auto-deprioritized-minor)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> env.java.home option didn't take effect in resource negotiator
> --
>
> Key: FLINK-22091
> URL: https://issues.apache.org/jira/browse/FLINK-22091
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.11.1, 1.12.2
>Reporter: zlzhang0122
>Assignee: Samrat Deb
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> stale-assigned
>
> If we have set the value of env.java.home in flink-conf.yaml, it will take 
> effect in standalone mode, but it won't take effect with resource negotiators 
> such as YARN, Kubernetes, etc. Maybe we can make a change so that it takes 
> effect there as well?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31992) FlinkKafkaConsumer API is suggested to use as part of documentation, when that API is deprecated for flink version 1.14

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31992:
---
  Labels: auto-deprioritized-major documentation documentation-update 
good-first-issue newbie  (was: documentation documentation-update 
good-first-issue newbie stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FlinkKafkaConsumer API is suggested to use as part of documentation, when 
> that API is deprecated for flink version 1.14
> ---
>
> Key: FLINK-31992
> URL: https://issues.apache.org/jira/browse/FLINK-31992
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.2
>Reporter: Sandesh Mendan
>Priority: Minor
>  Labels: auto-deprioritized-major, documentation, 
> documentation-update, good-first-issue, newbie
>
> In Flink version 1.14, even though the API class FlinkKafkaConsumer had been 
> [deprecated|https://nightlies.apache.org/flink/flink-docs-release-1.14/api/java/],
>  the official 
> [documentation|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/event-time/generating_watermarks/#watermark-strategies-and-the-kafka-connector]
>  suggests using that API.





[jira] [Updated] (FLINK-17488) JdbcSink has to support setting autoCommit mode of DB

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17488:
---
Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available stale-assigned  (was: auto-deprioritized-major 
auto-deprioritized-minor pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> JdbcSink has to support setting autoCommit mode of DB
> -
>
> Key: FLINK-17488
> URL: https://issues.apache.org/jira/browse/FLINK-17488
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Khokhlov Pavel
>Assignee: Khokhlov Pavel
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available, stale-assigned
>
> Just played with the new
> {noformat}
> org.apache.flink.api.java.io.jdbc.JdbcSink{noformat}
> ({{1.11-SNAPSHOT}})
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/jdbc.html]
> and batch mode with the MySQL driver (8.0.19).
> Noticed that *JdbcSink* supports only *autoCommit true* and the developer 
> cannot change that behaviour. But from a transactional and performance point 
> of view it is very important to support *autoCommit false* and call commit 
> explicitly.
> When a connection is created, it is in auto-commit mode. This means that 
> each individual SQL statement is treated as a transaction and is 
> automatically committed right after it is executed.
> For example, the Confluent connector disables it by default:
> [https://github.com/confluentinc/kafka-connect-jdbc/blob/da9619af1d7442dd91793dbc4dc65b8e7414e7b5/src/main/java/io/confluent/connect/jdbc/sink/JdbcDbWriter.java#L50]
>  
> As I see, you added it only for JDBCInputFormat in FLINK-12198.
>  
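The auto-commit semantics described in the ticket can be illustrated generically. This is a minimal sketch using Python's sqlite3 module as a stand-in for any JDBC-style driver (it is not Flink or MySQL code); it contrasts per-statement auto-commit with a single explicit commit covering a whole batch, which is the mode the ticket asks JdbcSink to expose.

```python
import os
import sqlite3
import tempfile

# Hypothetical illustration (sqlite3 standing in for a JDBC-style driver):
# contrast per-statement auto-commit with one explicit commit per batch.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

auto = sqlite3.connect(path, isolation_level=None)  # auto-commit mode
auto.execute("CREATE TABLE t (v INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")            # committed immediately
auto.close()

reader = sqlite3.connect(path)
assert reader.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1

batch = sqlite3.connect(path)          # default mode: explicit transactions
batch.execute("INSERT INTO t VALUES (2)")
# The insert sits in an open transaction, invisible to other connections
# until commit() -- one commit covers the whole batch.
assert reader.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1
batch.commit()
assert reader.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 2
batch.close()
```

With auto-commit disabled, a sink can group many writes into one transaction, which is both faster and atomic, exactly the behaviour the ticket requests for JdbcSink.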





[jira] [Updated] (FLINK-32101) FlinkKafkaInternalProducerITCase.testInitTransactionId test failed

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32101:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FlinkKafkaInternalProducerITCase.testInitTransactionId test failed
> --
>
> Key: FLINK-32101
> URL: https://issues.apache.org/jira/browse/FLINK-32101
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> FlinkKafkaInternalProducerITCase.testInitTransactionId test failed.
>   Caused by: org.apache.kafka.common.KafkaException: Unexpected error in 
> InitProducerIdResponse; The request timed out.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48990=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=22973
> {code:java}
> Caused by: org.apache.kafka.common.KafkaException: 
> org.apache.kafka.common.KafkaException: Unexpected error in 
> InitProducerIdResponse; The request timed out.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593)
>   at 
> java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
>   at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
>   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:650)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.abortTransactions(FlinkKafkaProducer.java:1290)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.initializeState(FlinkKafkaProducer.java:1216)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:189)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:171)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:95)
>   at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:274)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:747)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.call(StreamTaskActionExecutor.java:100)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:722)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:688)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.KafkaException: Unexpected error in 
> InitProducerIdResponse; The request timed out.
>   at 
> org.apache.kafka.clients.producer.internals.TransactionManager$InitProducerIdHandler.handleResponse(TransactionManager.java:1418)
>   at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1322)
>   

[jira] [Updated] (FLINK-31595) MiniBatchLocalGroupAggFunction produces wrong aggregate results with state clean up

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31595:
---
  Labels: auto-deprioritized-critical pull-request-available  (was: 
pull-request-available stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> MiniBatchLocalGroupAggFunction produces wrong aggregate results with state 
> clean up
> ---
>
> Key: FLINK-31595
> URL: https://issues.apache.org/jira/browse/FLINK-31595
> Project: Flink
>  Issue Type: Bug
>Reporter: Bo Cui
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available
>
> If the upstream operator can emit retract data, the first record in a batch 
> may be a retraction; such a leading retraction should be ignored.
> https://github.com/apache/flink/blob/a64781b1ef8f129021bdcddd3b07548e6caa4a72/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/aggregate/MiniBatchLocalGroupAggFunction.java#L68
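The failure mode can be sketched with a hypothetical mini-batch local COUNT aggregate (a Python stand-in, not the actual MiniBatchLocalGroupAggFunction): the local accumulator only covers the current batch, so a retraction that arrives as the first record for a key has nothing local to cancel and must be skipped.

```python
# Hypothetical sketch of a mini-batch local COUNT aggregate. Records are
# (key, is_retract) pairs; the accumulator holds partial counts for the
# current batch only, so a retraction heading the batch for a key has
# nothing to cancel locally and is ignored instead of going negative.
def local_count_agg(batch):
    acc = {}  # key -> partial count for this mini-batch only
    for key, is_retract in batch:
        if is_retract:
            if key not in acc:
                # Leading retraction: the record it cancels is not in this
                # batch, so skip it here and let the global agg handle it.
                continue
            acc[key] -= 1
        else:
            acc[key] = acc.get(key, 0) + 1
    return acc

# The retraction for "a" heads the batch and is ignored; the partial
# counts reflect only what this batch actually adds.
partials = local_count_agg([("a", True), ("a", False), ("b", False)])
```

Without the `key not in acc` guard, the leading retraction would drive the partial count for "a" negative and the global aggregate would end up wrong.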





[jira] [Updated] (FLINK-32036) TableEnvironmentTest.test_explain is unstable on azure ci

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32036:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> TableEnvironmentTest.test_explain is unstable on azure ci
> -
>
> Key: FLINK-32036
> URL: https://issues.apache.org/jira/browse/FLINK-32036
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.17.1
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> it failed on CI (1.17 branch so far)
> {noformat}
> May 07 01:51:35 === FAILURES 
> ===
> May 07 01:51:35 __ TableEnvironmentTest.test_explain 
> ___
> May 07 01:51:35 
> May 07 01:51:35 self = 
>  testMethod=test_explain>
> May 07 01:51:35 
> May 07 01:51:35 def test_explain(self):
> May 07 01:51:35 schema = RowType() \
> May 07 01:51:35 .add('a', DataTypes.INT()) \
> May 07 01:51:35 .add('b', DataTypes.STRING()) \
> May 07 01:51:35 .add('c', DataTypes.STRING())
> May 07 01:51:35 t_env = self.t_env
> May 07 01:51:35 t = t_env.from_elements([], schema)
> May 07 01:51:35 result = t.select(t.a + 1, t.b, t.c)
> May 07 01:51:35 
> May 07 01:51:35 >   actual = result.explain()
> May 07 01:51:35 
> May 07 01:51:35 pyflink/table/tests/test_table_environment_api.py:66
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48766=logs=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3=85189c57-d8a0-5c9c-b61d-fc05cfac62cf=25029





[jira] [Updated] (FLINK-31877) StreamExecutionEnvironmentTests.test_from_collection_with_data_types is unstable

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31877:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> StreamExecutionEnvironmentTests.test_from_collection_with_data_types is 
> unstable
> 
>
> Key: FLINK-31877
> URL: https://issues.apache.org/jira/browse/FLINK-31877
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.1
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> {noformat}
> Apr 21 05:11:45 === FAILURES 
> ===
> Apr 21 05:11:45 _ 
> StreamExecutionEnvironmentTests.test_from_collection_with_data_types _
> Apr 21 05:11:45 
> Apr 21 05:11:45 self = 
>   testMethod=test_from_collection_with_data_types>
> Apr 21 05:11:45 
> Apr 21 05:11:45 def test_from_collection_with_data_types(self):
> Apr 21 05:11:45 # verify from_collection for the collection with 
> single object.
> Apr 21 05:11:45 ds = self.env.from_collection(['Hi', 'Hello'], 
> type_info=Types.STRING())
> Apr 21 05:11:45 ds.add_sink(self.test_sink)
> Apr 21 05:11:45 >   self.env.execute("test from collection with single 
> object")
> Apr 21 05:11:45 
> Apr 21 05:11:45 
> pyflink/datastream/tests/test_stream_execution_environment.py:257: 
> Apr 21 05:11:45 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ _ _ _ _ _ _ _ _ 
> Apr 21 05:11:45 pyflink/datastream/stream_execution_environment.py:764: in 
> execute
> Apr 21 05:11:45 return 
> JobExecutionResult(self._j_stream_execution_environment.execute(j_stream_graph))
> Apr 21 05:11:45 
> .tox/py38-cython/lib/python3.8/site-packages/py4j/java_gateway.py:1321: in 
> __call__
> Apr 21 05:11:45 return_value = get_return_value(
> Apr 21 05:11:45 pyflink/util/exceptions.py:146: in deco
> Apr 21 05:11:45 return f(*a, **kw)
> Apr 21 05:11:45 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ _ _ _ _ _ _ _ _
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48320=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=31864





[jira] [Updated] (FLINK-32156) Int2AdaptiveHashJoinOperatorTest produced no output for 900s on AZP

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32156:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> Int2AdaptiveHashJoinOperatorTest produced no output for 900s on AZP
> ---
>
> Key: FLINK-32156
> URL: https://issues.apache.org/jira/browse/FLINK-32156
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.17.2
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48892=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=10930
> {noformat}
> May 11 06:25:13 
> ==
> May 11 06:25:13 Process produced no output for 900 seconds.
> May 11 06:25:13 
> ==
> ...
> May 11 06:25:14 "main" #1 prio=5 os_prio=0 tid=0x7f672c00b800 nid=0x4b8 
> waiting on condition [0x7f6735dbd000]
> May 11 06:25:14java.lang.Thread.State: RUNNABLE
> May 11 06:25:14   at 
> org.apache.flink.table.runtime.util.UniformBinaryRowGenerator.next(UniformBinaryRowGenerator.java:90)
> May 11 06:25:14   at 
> org.apache.flink.table.runtime.util.UniformBinaryRowGenerator.next(UniformBinaryRowGenerator.java:27)
> May 11 06:25:14   at 
> org.apache.flink.runtime.operators.testutils.UnionIterator.next(UnionIterator.java:61)
> May 11 06:25:14   at 
> org.apache.flink.table.runtime.operators.join.Int2HashJoinOperatorTestBase.joinAndAssert(Int2HashJoinOperatorTestBase.java:271)
> May 11 06:25:14   at 
> org.apache.flink.table.runtime.operators.join.Int2HashJoinOperatorTestBase.buildJoin(Int2HashJoinOperatorTestBase.java:77)
> May 11 06:25:14   at 
> org.apache.flink.table.runtime.operators.join.Int2AdaptiveHashJoinOperatorTest.testBuildFirstHashLeftOutJoinFallbackToSMJ(Int2AdaptiveHashJoinOperatorTest.java:114)
> May 11 06:25:14   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 11 06:25:14   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 11 06:25:14   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 11 06:25:14   at java.lang.reflect.Method.invoke(Method.java:498)
> May 11 06:25:14   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> May 11 06:25:14   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 11 06:25:14   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> {noformat}





[jira] [Updated] (FLINK-32269) CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32269:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP
> 
>
> Key: FLINK-32269
> URL: https://issues.apache.org/jira/browse/FLINK-32269
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49532=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=15797
> {noformat}
> Jun 01 03:40:51 03:40:51.881 [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 104.874 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase
> Jun 01 03:40:51 03:40:51.881 [ERROR] 
> CreateTableAsITCase.testCreateTableAsInStatementSet  Time elapsed: 40.729 s  
> <<< FAILURE!
> Jun 01 03:40:51 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected:  but was: 
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase.testCreateTableAsInStatementSet(CreateTableAsITCase.java:50)
> Jun 01 03:40:51   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 03:40:51   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 03:40:51   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 03:40:51   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}





[jira] [Updated] (FLINK-32151) 'Run kubernetes pyflink application test' fails while pulling image

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32151:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> 'Run kubernetes pyflink application test' fails while pulling image
> ---
>
> Key: FLINK-32151
> URL: https://issues.apache.org/jira/browse/FLINK-32151
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Deployment / Kubernetes
>Affects Versions: 1.16.2
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {noformat}
> 2023-05-16T13:29:39.0614891Z May 16 13:29:39 Current logs for 
> flink-native-k8s-pyflink-application-1-6f4c9bfc56-cstw7: 
> 2023-05-16T13:29:39.1253736Z Error from server (BadRequest): container 
> "flink-main-container" in pod 
> "flink-native-k8s-pyflink-application-1-6f4c9bfc56-cstw7" is waiting to 
> start: image can't be pulled
> 2023-05-16T13:29:39.2611218Z May 16 13:29:39 deployment.apps 
> "flink-native-k8s-pyflink-application-1" deleted
> 2023-05-16T13:29:39.4214711Z May 16 13:29:39 
> clusterrolebinding.rbac.authorization.k8s.io "flink-role-binding-default" 
> deleted
> 2023-05-16T13:29:40.2644587Z May 16 13:29:40 
> pod/flink-native-k8s-pyflink-application-1-6f4c9bfc56-cstw7 condition met
> 2023-05-16T13:29:40.2664618Z May 16 13:29:40 Stopping minikube ...
> 2023-05-16T13:29:40.3396336Z May 16 13:29:40 * Stopping node "minikube"  ...
> 2023-05-16T13:29:50.7499872Z May 16 13:29:50 * 1 node stopped.
> {noformat}
> it's very similar to https://issues.apache.org/jira/browse/FLINK-28226





[jira] [Updated] (FLINK-31862) KafkaSinkITCase.testStartFromSavepoint is unstable

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31862:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> KafkaSinkITCase.testStartFromSavepoint is unstable
> --
>
> Key: FLINK-31862
> URL: https://issues.apache.org/jira/browse/FLINK-31862
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48243=logs=9c5a5fe6-2f39-545e-1630-feb3d8d0a1ba=99b23320-1d05-5741-d63f-9e78473da39e=36611
> {noformat}
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> Apr 19 01:42:20   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Apr 19 01:42:20   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Apr 19 01:42:20   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
> Apr 19 01:42:20   at 
> org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContext.createTopic(KafkaSinkExternalContext.java:101)
> Apr 19 01:42:20   ... 111 more
> {noformat}





[jira] [Updated] (FLINK-31756) KafkaTableITCase.testStartFromGroupOffsetsNone fails due to UnknownTopicOrPartitionException

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31756:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> KafkaTableITCase.testStartFromGroupOffsetsNone fails due to 
> UnknownTopicOrPartitionException
> 
>
> Key: FLINK-31756
> URL: https://issues.apache.org/jira/browse/FLINK-31756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> The following build fails with {{UnknownTopicOrPartitionException}}
> {noformat}
> Dec 03 01:10:59 Multiple Failures (1 failure)
> Dec 03 01:10:59 -- failure 1 --
> Dec 03 01:10:59 [Any cause is instance of class 'class 
> org.apache.kafka.clients.consumer.NoOffsetForPartitionException'] 
> Dec 03 01:10:59 Expecting any element of:
> Dec 03 01:10:59   [java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
> Dec 03 01:10:59   ...(64 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> Dec 03 01:10:59 java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> Dec 03 01:10:59   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> Dec 03 01:10:59   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> Dec 03 01:10:59   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
> Dec 03 01:10:59   ...(67 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> Dec 03 01:10:59 org.apache.kafka.common.errors.TimeoutException: The 
> request timed out.
> Dec 03 01:10:59 ]
> Dec 03 01:10:59 to satisfy the given assertions requirements but none did:
> Dec 03 01:10:59 
> Dec 03 01:10:59 java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
> Dec 03 01:10:59   ...(64 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed)
> Dec 03 01:10:59 error: 
> Dec 03 01:10:59 Expecting actual throwable to be an instance of:
> Dec 03 01:10:59   
> org.apache.kafka.clients.consumer.NoOffsetForPartitionException
> Dec 03 01:10:59 but was:
> Dec 03 01:10:59   java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> [...]
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47892=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=36657



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32553) ClusterEntrypointTest.testCloseAsyncShouldNotDeregisterApp failed on AZP

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32553:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Critical, 
please raise the priority and ask a committer to assign you the issue or revive 
the public discussion.


> ClusterEntrypointTest.testCloseAsyncShouldNotDeregisterApp failed on AZP
> 
>
> Key: FLINK-32553
> URL: https://issues.apache.org/jira/browse/FLINK-32553
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.17.2
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51013=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=7961
> {noformat}
> Jul 06 05:38:37 [ERROR] Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 71.304 s <<< FAILURE! - in 
> org.apache.flink.runtime.entrypoint.ClusterEntrypointTest
> Jul 06 05:38:37 [ERROR] 
> org.apache.flink.runtime.entrypoint.ClusterEntrypointTest.testCloseAsyncShouldNotDeregisterApp
>   Time elapsed: 22.51 s  <<< ERROR!
> Jul 06 05:38:37 
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint TestingEntryPoint.
> Jul 06 05:38:37   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:255)
> Jul 06 05:38:37   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypointTest.startClusterEntrypoint(ClusterEntrypointTest.java:347)
> Jul 06 05:38:37   at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypointTest.testCloseAsyncShouldNotDeregisterApp(ClusterEntrypointTest.java:175)
> Jul 06 05:38:37   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 06 05:38:37   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 06 05:38:37   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 06 05:38:37   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 06 05:38:37   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 06 05:38:37   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 06 05:38:37   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32155) Multiple CIs jobs failed due to "Could not connect to azure.archive.ubuntu.com"

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32155:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Multiple CIs jobs failed due to "Could not connect to 
> azure.archive.ubuntu.com"
> ---
>
> Key: FLINK-32155
> URL: https://issues.apache.org/jira/browse/FLINK-32155
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> The issue is very similar to https://issues.apache.org/jira/browse/FLINK-30921
> the difference is that https://issues.apache.org/jira/browse/FLINK-30921 is 
> for e2e jobs while this one is not
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49065=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=37
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49065=logs=4eda0b4a-bd0d-521a-0916-8285b9be9bb5=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9=37
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49065=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=37



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31828) List field in a POJO data stream results in table program compilation failure

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31828:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> List field in a POJO data stream results in table program compilation failure
> -
>
> Key: FLINK-31828
> URL: https://issues.apache.org/jira/browse/FLINK-31828
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.1
> Environment: Java 11
> Flink 1.16.1
>Reporter: Vladimir Matveev
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: MainPojo.java, generated-code.txt, stacktrace.txt
>
>
> Suppose I have a POJO class like this:
> {code:java}
> public class Example {
> private String key;
> private List> values;
> // getters, setters, equals+hashCode omitted
> }
> {code}
> When a DataStream with this class is converted to a table, and some 
> operations are performed on it, it results in an exception which explicitly 
> says that I should file a ticket:
> {noformat}
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
> {noformat}
> Please find the example Java code and the full stack trace attached.
> From the exception and generated code it seems that Flink is upset with the 
> list field being treated as an array - but I cannot have an array type there 
> in the real code.
> Also note that if I _don't_ specify the schema explicitly, it then maps the 
> {{values}} field to a `RAW('java.util.List', '...')` type, which also does 
> not work correctly and fails the job even for the simplest operations, such 
> as printing.
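The report hinges on whether a POJO's {{List}} field carries a usable generic signature; type extractors (Flink's included) read it via reflection, and when extraction fails the field degrades to a raw {{java.util.List}}, as described above. A minimal sketch of that signature lookup follows; the {{Example}} class and its element type {{List<String>}} are illustrative assumptions, since the generic parameters were lost in the excerpt above.

```java
import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.util.List;

public class GenericFieldDemo {
    // Hypothetical POJO mirroring the Example class from the report,
    // with a concrete element type chosen for illustration.
    public static class Example {
        public String key;
        public List<List<String>> values;
    }

    // Reads the first generic type argument of a declared field. This is the
    // information a type extractor needs; if it is absent, only the raw
    // java.util.List remains.
    static String genericTypeOf(String fieldName) throws NoSuchFieldException {
        Field f = Example.class.getDeclaredField(fieldName);
        ParameterizedType t = (ParameterizedType) f.getGenericType();
        return t.getActualTypeArguments()[0].getTypeName();
    }

    public static void main(String[] args) throws NoSuchFieldException {
        // Prints java.util.List<java.lang.String>
        System.out.println(genericTypeOf("values"));
    }
}
```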



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28847) Typo in FileSinkProgram.java in file-sink-file-test module

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28847:
---
  Labels: auto-deprioritized-minor easyfix  (was: easyfix stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Typo in FileSinkProgram.java in file-sink-file-test module
> --
>
> Key: FLINK-28847
> URL: https://issues.apache.org/jira/browse/FLINK-28847
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.15.1
>Reporter: Xin Wen
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, easyfix
> Attachments: image-2022-08-07-11-42-25-221.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There is a redundant semicolon in 
> [FileSinkProgram.java|https://github.com/apache/flink/blob/b0859789e7733c73a21e600ec0d595ead730c59d/flink-end-to-end-tests/flink-file-sink-test/src/main/java/org/apache/flink/connector/file/sink/FileSinkProgram.java#L57]
>  which may confuse users.
> !image-2022-08-07-11-42-25-221.png|width=700,height=290!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29789) Fix flaky tests in CheckpointCoordinatorTest

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-29789:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Fix flaky tests in CheckpointCoordinatorTest
> 
>
> Key: FLINK-29789
> URL: https://issues.apache.org/jira/browse/FLINK-29789
> Project: Flink
>  Issue Type: Bug
>Reporter: Sopan Phaltankar
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> The test 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinatorTest.testTriggerAndDeclineCheckpointComplex
>  is flaky and has the following failure:
> Failures:
> [ERROR] Failures:
> [ERROR]   
> CheckpointCoordinatorTest.testTriggerAndDeclineCheckpointComplex:1054 
> expected:<2> but was:<1>
> I used the tool [NonDex|https://github.com/TestingResearchIllinois/NonDex] to 
> find this flaky test.
> Command: mvn -pl flink-runtime edu.illinois:nondex-maven-plugin:1.1.2:nondex 
> -Dtest=org.apache.flink.runtime.checkpoint.CheckpointCoordinatorTest#testTriggerAndDeclineCheckpointComplex
> I analyzed the assertion failure and found that checkpoint1Id and 
> checkpoint2Id are getting assigned by iterating over a HashMap.
> As we know, iterator() returns elements in no guaranteed order 
> ([JavaDoc|https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html#entrySet--]),
>  and this might cause test failures for some orders.
> Therefore, to remove this non-determinism, we would change HashMap to 
> LinkedHashMap.
> On further analysis, it was found that the Map is getting initialized on line 
> 1894 of org.apache.flink.runtime.checkpoint.CheckpointCoordinator class.
> After changing from HashMap to LinkedHashMap, the above test is passing 
> without any non-determinism.
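The fix described above relies on {{LinkedHashMap}} guaranteeing insertion-order iteration where {{HashMap}} guarantees nothing. A small self-contained sketch of that property (the checkpoint names and values here are illustrative, not taken from the test):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrderDemo {
    // Builds a comma-separated list of the map's keys in iteration order.
    // With a LinkedHashMap this is always the insertion order; with a plain
    // HashMap the order is unspecified, which is the source of the flakiness
    // described in the ticket.
    static String iterationOrder(Map<String, Integer> map) {
        StringBuilder keys = new StringBuilder();
        for (String k : map.keySet()) {
            keys.append(k).append(',');
        }
        return keys.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> pending = new LinkedHashMap<>();
        pending.put("checkpoint1", 17); // insertion order is preserved
        pending.put("checkpoint2", 42);
        System.out.println(iterationOrder(pending)); // checkpoint1,checkpoint2,
    }
}
```

A test that assumes "checkpoint1" is iterated before "checkpoint2" passes deterministically with {{LinkedHashMap}}, but only by luck with {{HashMap}} — which is exactly what NonDex exposes by shuffling unspecified iteration orders.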



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32113) TtlMapStateAllEntriesTestContext failure in generic types

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32113:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> TtlMapStateAllEntriesTestContext failure in generic types
> -
>
> Key: FLINK-32113
> URL: https://issues.apache.org/jira/browse/FLINK-32113
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> I have the same test failure in both e2e test runs:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49076=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=160c9ae5-96fd-516e-1c91-deb81f59292a=2924]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49076=logs=bea52777-eaf8-5663-8482-18fbc3630e81=b2642e3a-5b86-574d-4c8a-f7e2842bfb14=2924]
> {code:java}
> 16:36:27.471 [ERROR] 
> /home/vsts/work/1/s/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlMapStateAllEntriesTestContext.java:[49,30]
>  incompatible types: inference variable T0 has incompatible bounds
> equality constraints: java.lang.String,java.lang.Integer,UK,T0,T0
> lower bounds: java.lang.Integer
> [...]
> 16:36:27.495 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project flink-runtime: Compilation failure
> 16:36:27.495 [ERROR] 
> /home/vsts/work/1/s/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlMapStateAllEntriesTestContext.java:[49,30]
>  incompatible types: inference variable T0 has incompatible bounds
> 16:36:27.496 [ERROR] equality constraints: 
> java.lang.String,java.lang.Integer,UK,T0,T0
> 16:36:27.496 [ERROR] lower bounds: java.lang.Integer
> 16:36:27.496 [ERROR] -> [Help 1] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32138) SQLClientSchemaRegistryITCase fails with timeout on AZP

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32138:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> SQLClientSchemaRegistryITCase fails with timeout on AZP
> ---
>
> Key: FLINK-32138
> URL: https://issues.apache.org/jira/browse/FLINK-32138
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.2
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49174=logs=6e8542d7-de38-5a33-4aca-458d6c87066d=10d6732b-d79a-5c68-62a5-668516de5313=15753
> {{SQLClientSchemaRegistryITCase}} fails on AZP as
> {noformat}
> May 20 03:41:34 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase  Time 
> elapsed: 600.05 s  <<< ERROR!
> May 20 03:41:34 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 minutes
> May 20 03:41:34   at 
> java.base@11.0.19/jdk.internal.misc.Unsafe.park(Native Method)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
> May 20 03:41:34   at 
> app//com.github.dockerjava.api.async.ResultCallbackTemplate.awaitCompletion(ResultCallbackTemplate.java:91)
> May 20 03:41:34   at 
> app//org.testcontainers.images.TimeLimitedLoggedPullImageResultCallback.awaitCompletion(TimeLimitedLoggedPullImageResultCallback.java:52)
> May 20 03:41:34   at 
> app//org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:89)
> May 20 03:41:34   at 
> app//org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:28)
> May 20 03:41:34   at 
> app//org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
> May 20 03:41:34   at 
> app//org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1330)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:640)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:335)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.start(GenericContainer.java:326)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1063)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> May 20 03:41:34   at 
> app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> May 20 03:41:34   at 
> app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> May 20 03:41:34   at 
> java.base@11.0.19/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-18397) Translate "Table & SQL Connectors Overview" page into Chinese

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18397:
---
Labels: auto-unassigned pull-request-available stale-assigned  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Translate "Table & SQL Connectors Overview" page into Chinese
> -
>
> Key: FLINK-18397
> URL: https://issues.apache.org/jira/browse/FLINK-18397
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Jrebel
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-assigned
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/
> The markdown file is located in flink/docs/dev/table/connectors/index.zh.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28291) Add kerberos delegation token renewer feature instead of logged from keytab individually

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28291:
---
  Labels: PatchAvailable auto-deprioritized-minor patch-available  (was: 
PatchAvailable patch-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add kerberos delegation token renewer feature instead of logged from keytab 
> individually
> 
>
> Key: FLINK-28291
> URL: https://issues.apache.org/jira/browse/FLINK-28291
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / YARN
>Affects Versions: 1.13.5
>Reporter: jiulong.zhu
>Priority: Not a Priority
>  Labels: PatchAvailable, auto-deprioritized-minor, patch-available
> Attachments: FLINK-28291.0001.patch
>
>
> h2. 1. Design
> LifeCycle of delegation token in RM:
>  # Container starts with DT given by client.
>  # Enable delegation token renewer by:
>  ## set {{security.kerberos.token.renew.enabled}} true, default false. And
>  ## specify {{security.kerberos.login.keytab}} and 
> {{security.kerberos.login.principal}}
>  # When the delegation token renewer is enabled, the renewer thread 
> re-obtains tokens from the DelegationTokenProvider (currently only 
> HadoopFSDelegationTokenProvider). It then broadcasts the new tokens to the 
> RM locally, and to all JMs and all TMs via RPCGateway.
>  # The RM process adds the new tokens to its context via UserGroupInformation.
> LifeCycle of delegation token in JM / TM:
>  # The TaskManager starts with a keytab stored in remote HDFS.
>  # Once registered successfully, the JM / TM receive the RM's current tokens 
> boxed in {{JobMasterRegistrationSuccess}} / {{TaskExecutorRegistrationSuccess}}.
>  # The JM / TM processes add the new tokens to their context via 
> UserGroupInformation.
> It's too heavyweight and unnecessary to retrieve the ResourceManager leader 
> via the HAService, so the DelegationTokenManager is instantiated by the 
> ResourceManager. That way it can hold a direct reference to the 
> ResourceManager instead of the RM RPCGateway or its own gateway.
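The renew-and-broadcast loop in the lifecycle above can be sketched as a self-rescheduling task. This is a sketch only: the helper names and the 75% renewal ratio are assumptions for illustration, not the actual patch, whose quoted logs show only the resulting "next schedule delay" values.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

public class TokenRenewerSketch {
    // Hypothetical helper: renew at 75% of the token lifetime so a fresh
    // token is obtained well before the old one expires.
    static long nextRenewalDelayMillis(long issueTimeMillis, long expiryTimeMillis) {
        return (expiryTimeMillis - issueTimeMillis) * 3 / 4;
    }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // One renewal round: obtain a fresh token (the supplier returns the delay
    // until the next renewal), broadcast it to the JMs / TMs, then reschedule
    // this same method after that delay.
    void renewLoop(LongSupplier obtainTokenReturningDelayMillis, Runnable broadcast) {
        long delay = obtainTokenReturningDelayMillis.getAsLong();
        broadcast.run();
        scheduler.schedule(
                () -> renewLoop(obtainTokenReturningDelayMillis, broadcast),
                delay, TimeUnit.MILLISECONDS);
    }
}
```

Keeping the renewer inside the RM, as the description argues, means {{broadcast}} can call the RM's worker gateways directly rather than going through leader retrieval.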
> h2. 2. Test
>  # No local junit test. It’s too heavy to build junit environments including 
> KDC and local hadoop.
>  # Cluster test
> step 1: Specify a krb5.conf with a short token lifetime (ticket_lifetime, 
> renew_lifetime) when submitting the Flink application.
> {code}
> flink run -yD security.kerberos.token.renew.enabled=true \
>   -yD security.kerberos.krb5-conf.path=/home/work/krb5.conf \
>   -yD security.kerberos.login.use-ticket-cache=false ...
> {code}
> step 2: Watch the token identifier changelog and its synchronization between 
> the RM and the workers.
> >> 
> In RM / JM log, 
> 2022-06-28 15:13:03,509 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> New token (HDFS_DELEGATION_TOKEN token 52101 for work on ha-hdfs:newfyyy) 
> created in KerberosDelegationToken, and next schedule delay is 64799880 ms. 
> 2022-06-28 15:13:03,529 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> Updating delegation tokens for current user. 2022-06-28 15:13:04,729 INFO 
> org.apache.flink.runtime.util.HadoopUtils [] - JobMaster receives new token 
> (HDFS_DELEGATION_TOKEN token 52101 for work on ha-hdfs:newfyyy) from RM.
> … 
> 2022-06-29 09:13:03,732 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> New token (HDFS_DELEGATION_TOKEN token 52310 for work on ha-hdfs:newfyyy) 
> created in KerberosDelegationToken, and next schedule delay is 64800045 ms.
> 2022-06-29 09:13:03,805 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> Updating delegation tokens for current user. 
> 2022-06-29 09:13:03,806 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> JobMaster receives new token (HDFS_DELEGATION_TOKEN token 52310 for work on 
> ha-hdfs:newfyyy) from RM.
> >> 
> In TM log, 
> 2022-06-28 15:13:17,983 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> TaskManager receives new token (HDFS_DELEGATION_TOKEN token 52101 for work on 
> ha-hdfs:newfyyy) from RM. 
> 2022-06-28 15:13:18,016 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> Updating delegation tokens for current user. 
> … 
> 2022-06-29 09:13:03,809 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> TaskManager receives new token (HDFS_DELEGATION_TOKEN token 52310 for work on 
> ha-hdfs:newfyyy) from RM.
> 2022-06-29 09:13:03,836 INFO org.apache.flink.runtime.util.HadoopUtils [] - 
> Updating delegation tokens for current user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28750) Whether to add field comment for hive table

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28750:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Whether to add field comment for hive table
> ---
>
> Key: FLINK-28750
> URL: https://issues.apache.org/jira/browse/FLINK-28750
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.14.5
>Reporter: hehuiyuan
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
> Attachments: image-2022-07-30-15-53-03-754.png, 
> image-2022-07-30-16-36-37-032.png
>
>
> Currently, I have a Hive DDL, as follows:
> {code:java}
> "set table.sql-dialect=hive;\n" +
> "CREATE TABLE IF NOT EXISTS myhive.dev.shipu3_test_1125 (\n" +
> "   `id` int COMMENT 'ia',\n" +
> "   `cartdid` bigint COMMENT 'aaa',\n" +
> "   `customer` string COMMENT '',\n" +
> "   `product` string COMMENT '',\n" +
> "   `price` double COMMENT '',\n" +
> "   `dt` STRING COMMENT ''\n" +
> ") PARTITIONED BY (dt STRING) STORED AS TEXTFILE TBLPROPERTIES (\n" +
> "  'streaming-source.enable' = 'false',\n" +
> "  'streaming-source.partition.include' = 'all',\n" +
> "  'lookup.join.cache.ttl' = '12 h'\n" +
> ")"; {code}
> It is parsed as SqlCreateHiveTable by the Hive dialect parser, but the field 
> comment is lost.
>  
>  
> !image-2022-07-30-16-36-37-032.png|width=777,height=526!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28837) Translate "Hybrid Source" page of "DataStream Connectors" into Chinese

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28837:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Translate "Hybrid Source" page of "DataStream Connectors" into Chinese
> --
>
> Key: FLINK-28837
> URL: https://issues.apache.org/jira/browse/FLINK-28837
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: JasonLee
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> The page url is 
> [https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/connectors/datastream/hybridsource/]
> The markdown file is located in 
> docs/content.zh/docs/connectors/datastream/hybridsource.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28839) Incorrect english sentence in docs/content/docs/dev/dataset/iterations.md

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28839:
---
  Labels: auto-deprioritized-minor easy-fix starter  (was: easy-fix 
stale-minor starter)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Incorrect english sentence in docs/content/docs/dev/dataset/iterations.md
> -
>
> Key: FLINK-28839
> URL: https://issues.apache.org/jira/browse/FLINK-28839
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Bisvarup Mukherjee
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, easy-fix, starter
> Attachments: Screenshot 2022-08-05 at 5.56.52 PM.png
>
>
> There is a line in the [iteration 
> documentation|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/dataset/iterations/#example-propagate-minimum-in-graph]
>  that reads
> _"it can be skipped it in the next workset"_ 
> This is not properly formed English; it should have been
> _"it can be skipped in the next workset"_
>  
> !Screenshot 2022-08-05 at 5.56.52 PM.png|width=878,height=320!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28358) when debug in local ,throw out "The system time period specification expects Timestamp type but is 'TIMESTAMP_WITH_LOCAL_TIME_ZONE' " exception

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28358:
---
  Labels: auto-deprioritized-minor debug pull-request-available  (was: 
debug pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> when debug in local ,throw out "The system time period specification expects 
> Timestamp type but is 'TIMESTAMP_WITH_LOCAL_TIME_ZONE' " exception
> ---
>
> Key: FLINK-28358
> URL: https://issues.apache.org/jira/browse/FLINK-28358
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.4
> Environment: maven:3.2.5 maven:3.6.1  maven:3.3.9 
> openjdk:1.8.0_333
> idea:IntelliJ IDEA 2021.3 (Ultimate Edition)
>Reporter: PengfeiChang
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, debug, pull-request-available
> Attachments: image-2022-07-06-20-31-29-743.png
>
>
> h1. Subject
> When I debug locally to inspect the JDBC connector lookup mechanism and run 
> org.apache.flink.connector.jdbc.table.JdbcLookupTableITCase.testLookup, the 
> following exception is thrown:
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. From 
> line 1, column 106 to line 1, column 120: The system time period 
> specification expects Timestamp type but is 'TIMESTAMP_WITH_LOCAL_TIME_ZONE'
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:164)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:215)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:736)
>   at 
> org.apache.flink.connector.jdbc.table.JdbcLookupTableITCase.useDynamicTableFactory(JdbcLookupTableITCase.java:195)
>   at 
> org.apache.flink.connector.jdbc.table.JdbcLookupTableITCase.testLookup(JdbcLookupTableITCase.java:81)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at 

[jira] [Updated] (FLINK-30782) Use https for schemaLocations

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-30782:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Use https for schemaLocations
> -
>
> Key: FLINK-30782
> URL: https://issues.apache.org/jira/browse/FLINK-30782
> Project: Flink
>  Issue Type: Technical Debt
>Reporter: Sergey Nuyanzin
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> In poms 
> {code:xml}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>   http://maven.apache.org/maven-v4_0_0.xsd">
> {code}
> use https for the xsd, e.g. https://maven.apache.org/xsd/maven-4.0.0.xsd



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30817) ClassCastException in TestValuesTableFactory.TestValuesScanTableSourceWithoutProjectionPushDown

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-30817:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> ClassCastException in 
> TestValuesTableFactory.TestValuesScanTableSourceWithoutProjectionPushDown
> ---
>
> Key: FLINK-30817
> URL: https://issues.apache.org/jira/browse/FLINK-30817
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Shuiqiang Chen
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> When applying partitions in 
> TestValuesScanTableSourceWithoutProjectionPushDown with no partition 
> provided, the following code will cause ClassCastException
> {code:java}
>  remainingPartitions = (List<Map<String, String>>) Collections.emptyMap();
>  this.data.put(Collections.emptyMap(), Collections.emptyList());
> {code}
> {code:java}
> java.lang.ClassCastException: java.util.Collections$EmptyMap cannot be cast 
> to java.util.List
>   at 
> org.apache.flink.table.planner.factories.TestValuesTableFactory$TestValuesScanTableSourceWithoutProjectionPushDown.applyPartitions(TestValuesTableFactory.java:1222)
>   at 
> org.apache.flink.table.planner.plan.abilities.source.PartitionPushDownSpec.apply(PartitionPushDownSpec.java:57)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.PushPartitionIntoTableSourceScanRule.onMatch(PushPartitionIntoTableSourceScanRule.java:183)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:343)
> {code}
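The failure can be reproduced in isolation. The sketch below is illustrative, not the Flink source: it shows why casting `Collections.emptyMap()` to a `List` throws at the cast site, and the straightforward fix of using `Collections.emptyList()` where a `List` is expected.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class EmptyPartitionsDemo {

    // Returns true if casting the empty map to a List throws, mirroring the
    // stack trace above. EmptyMap does not implement List, so the runtime
    // checkcast fails with ClassCastException.
    @SuppressWarnings("unchecked")
    static boolean castThrows() {
        Object empty = Collections.emptyMap();
        try {
            List<Map<String, String>> remaining = (List<Map<String, String>>) empty;
            return remaining == null; // unreachable: the cast above throws first
        } catch (ClassCastException e) {
            return true;
        }
    }

    // The straightforward fix: use an empty List where a List is expected.
    static List<Map<String, String>> fixed() {
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        System.out.println(castThrows());      // prints "true"
        System.out.println(fixed().isEmpty()); // prints "true"
    }
}
```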



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20578) Cannot create empty array using ARRAY[]

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20578:
---
Labels: pull-request-available stale-assigned starter  (was: 
pull-request-available starter)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Cannot create empty array using ARRAY[]
> ---
>
> Key: FLINK-20578
> URL: https://issues.apache.org/jira/browse/FLINK-20578
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: Fabian Hueske
>Assignee: Eric Xiao
>Priority: Major
>  Labels: pull-request-available, stale-assigned, starter
> Fix For: 1.18.0
>
> Attachments: Screen Shot 2022-10-25 at 10.50.42 PM.png, Screen Shot 
> 2022-10-25 at 10.50.47 PM.png, Screen Shot 2022-10-25 at 11.01.06 PM.png, 
> Screen Shot 2022-10-26 at 2.28.49 PM.png, image-2022-10-26-14-42-08-468.png, 
> image-2022-10-26-14-42-57-579.png
>
>
> Calling the ARRAY function without an element (`ARRAY[]`) results in an error 
> message.
> Is that the expected behavior?
> How can users create empty arrays?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32159) Hudi Source throws NPE

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32159:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Hudi Source throws NPE
> --
>
> Key: FLINK-32159
> URL: https://issues.apache.org/jira/browse/FLINK-32159
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0, 1.16.0, 1.17.0, 1.18.0
>Reporter: Bo Cui
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: image-2023-05-23-14-45-29-151.png
>
>
> Spark/Hive writes a Hudi table, Flink reads it, and the job fails because:
> !image-2023-05-23-14-45-29-151.png!
>  
> A null check should be added to AbstractColumnReader#readToVector:
> https://github.com/apache/flink/blob/119b8c584dc865ee8a40a5c6410dddf8b36bac5a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/reader/AbstractColumnReader.java#LL155C19-L155C20
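A minimal sketch of the kind of guard being proposed, with simplified, illustrative names rather than Flink's actual `AbstractColumnReader` signature: when a page was written without dictionary encoding, the dictionary is absent and must not be dereferenced.

```java
public class DictionaryReadSketch {

    // Illustrative stand-in for the dictionary-resolution step: `dictionary`
    // is null for pages written without dictionary encoding, and the lookup
    // below would throw a NullPointerException without the guard.
    static int[] decode(int[] ids, int[] dictionary) {
        if (dictionary == null) {
            return ids; // plain values: nothing to resolve
        }
        int[] out = new int[ids.length];
        for (int i = 0; i < ids.length; i++) {
            out[i] = dictionary[ids[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(decode(new int[] {7, 9}, null)[0]);             // prints "7"
        System.out.println(decode(new int[] {1, 0}, new int[] {5, 6})[0]); // prints "6"
    }
}
```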



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32188) Support to "where" query with a fixed-value array and simplify condition for an array-type filed.

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32188:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support to "where" query with a fixed-value array and simplify condition for 
> an array-type filed.
> -
>
> Key: FLINK-32188
> URL: https://issues.apache.org/jira/browse/FLINK-32188
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.2, 1.17.0, 1.16.1
>Reporter: Xin Chen
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: image-2023-05-25-17-16-02-288.png, 
> image-2023-05-25-20-44-08-834.png, image-2023-05-25-20-44-47-581.png, 
> image-2023-06-06-16-50-10-805.png, image-2023-06-06-16-50-54-467.png, 
> screenshot-1.png, screenshot-10.png, screenshot-11.png, screenshot-12.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png, screenshot-5.png, 
> screenshot-6.png, screenshot-7.png, screenshot-8.png, screenshot-9.png
>
>
> While implementing a custom data source connector (assume an image 
> connector), I created a table with DDL that declares a field "URL" of array 
> type. When submitting a SQL task with Flink, I queried this field against a 
> fixed array, for example "select * from image source where 
> URL=ARRAY['/flink.jpg', '/flink_1.jpg']", but the connector could not obtain 
> the corresponding predicate filters at all.
> Do custom connectors not support "where" queries on fields of array type?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20350) Incompatible Connectors due to Guava conflict

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20350:
---
Labels: auto-deprioritized-major auto-deprioritized-minor stale-assigned  
(was: auto-deprioritized-major auto-deprioritized-minor)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Incompatible Connectors due to Guava conflict
> -
>
> Key: FLINK-20350
> URL: https://issues.apache.org/jira/browse/FLINK-20350
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Connectors / Kinesis
>Affects Versions: 1.11.1, 1.11.2
>Reporter: Danny Cranmer
>Assignee: Samrat Deb
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> stale-assigned
>
> *Problem*
> Kinesis and GCP PubSub connector do not work together. The following error is 
> thrown.
> {code}
> java.lang.NoClassDefFoundError: Could not initialize class 
> io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder
>   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.DefaultPubSubSubscriberFactory.getSubscriber(DefaultPubSubSubscriberFactory.java:52)
>  ~[flink-connector-gcp-pubsub_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.PubSubSource.createAndSetPubSubSubscriber(PubSubSource.java:213)
>  ~[flink-connector-gcp-pubsub_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.PubSubSource.open(PubSubSource.java:102)
>  ~[flink-connector-gcp-pubsub_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>  ~[flink-core-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:291)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:473)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:92)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:469)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
>  ~[flink-streaming-java_2.11-1.11.1.jar:1.11.1]
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) 
> ~[flink-runtime_2.11-1.11.1.jar:1.11.1]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) 
> ~[flink-runtime_2.11-1.11.1.jar:1.11.1]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_252]
> {code}
> {code}
> <dependency>
> <groupId>org.apache.flink</groupId>
> <artifactId>flink-connector-gcp-pubsub_${scala.binary.version}</artifactId>
> <version>1.11.1</version>
> </dependency>
> <dependency>
> <groupId>org.apache.flink</groupId>
> <artifactId>flink-connector-kinesis_${scala.binary.version}</artifactId>
> <version>1.11.1</version>
> </dependency>
> {code}
> *Cause*
> This is caused by a Guava dependency conflict:
> - Kinesis Consumer > {{18.0}}
> - GCP PubSub > {{26.0-android}}
> {{NettyChannelBuilder}} fails to initialise due to missing method in guava:
> - 
> {{com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;CLjava/lang/Object;)V}}
> *Possible Fixes*
> - Align Guava versions
> - Shade Guava in either connector
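The second fix, shading Guava in a connector, can be sketched with the Maven shade plugin's relocation mechanism; the `shadedPattern` below is illustrative, not what any Flink module actually uses:

```xml
<!-- Sketch: relocate Guava so the connector's bytecode references its own
     copy, avoiding the classpath conflict between 18.0 and 26.0-android. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>my.project.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With the relocation in place, the shaded connector no longer resolves Guava from the shared classpath, so the other connector can keep the version it was built against.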



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22484) Built-in functions for collections

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22484:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Built-in functions for collections
> --
>
> Key: FLINK-22484
> URL: https://issues.apache.org/jira/browse/FLINK-22484
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0
>
>
> A number of built-in functions for working with collections are supported by 
> other vendors. After looking at PostgreSQL, BigQuery, and Spark, a list of 
> more or less generic collection functions was selected (for more details see 
> [1]).
> Feedback on the doc is welcome.
> [1] 
> [https://docs.google.com/document/d/1nS0Faur9CCop4sJoQ2kMQ2XU1hjg1FaiTSQp2RsZKEE/edit?usp=sharing]
> MAP_KEYS
> MAP_VALUES
> MAP_FROM_ARRAYS
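For reference, the intended semantics of the three functions listed above can be sketched in plain Java (illustrative only; the ticket itself is about the SQL built-ins):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CollectionFunctionsSketch {

    // MAP_KEYS(map): the keys of the map as an array.
    static <K, V> List<K> mapKeys(Map<K, V> m) {
        return new ArrayList<>(m.keySet());
    }

    // MAP_VALUES(map): the values of the map as an array.
    static <K, V> List<V> mapValues(Map<K, V> m) {
        return new ArrayList<>(m.values());
    }

    // MAP_FROM_ARRAYS(keys, values): zip two equally sized arrays into a map.
    static <K, V> Map<K, V> mapFromArrays(List<K> keys, List<V> values) {
        Map<K, V> m = new LinkedHashMap<>();
        for (int i = 0; i < keys.size(); i++) {
            m.put(keys.get(i), values.get(i));
        }
        return m;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = mapFromArrays(Arrays.asList("a", "b"), Arrays.asList(1, 2));
        System.out.println(mapKeys(m));   // prints "[a, b]"
        System.out.println(mapValues(m)); // prints "[1, 2]"
    }
}
```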



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32627) Add support for dynamic time window function

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32627:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add support for dynamic time window function
> 
>
> Key: FLINK-32627
> URL: https://issues.apache.org/jira/browse/FLINK-32627
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Affects Versions: 1.18.0
>Reporter: 张一帆
>Assignee: 张一帆
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> When window logic is frequently modified and adjusted, the entire program 
> currently has to be stopped, the code modified, and the program repackaged 
> and resubmitted to the cluster; the logic cannot be modified dynamically or 
> injected from outside. Instead, the window information could be obtained from 
> the data itself to trigger redistribution of windows, achieving the effect of 
> dynamic windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19589) Support per-connector FileSystem configuration

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19589:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Support per-connector FileSystem configuration
> --
>
> Key: FLINK-19589
> URL: https://issues.apache.org/jira/browse/FLINK-19589
> Project: Flink
>  Issue Type: Improvement
>  Components: FileSystems
>Affects Versions: 1.12.0
>Reporter: Padarn Wilson
>Assignee: Josh Mahonin
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: FLINK-19589.patch
>
>
> Currently, options for file systems can only be configured globally. However, 
> in many cases, users would like more fine-grained configuration.
> We could either allow a properties map on our connectors, similar to the 
> Kafka or Kinesis properties, or something like the following:
> Management of two properties related S3 Object management:
>  - [Lifecycle configuration 
> |https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html]
>  - [Object 
> tagging|https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.htm]
> Being able to control these is useful for people who want to manage jobs 
> using S3 for checkpointing or job output, but need to control per job level 
> configuration of the tagging/lifecycle for the purposes of auditing or cost 
> control (for example deleting old state from S3)
> Ideally, it would be possible to control this on each object being written by 
> Flink, or at least at a job level.
> _Note_*:* Some related existing properties can be set using the hadoop module 
> using system properties: see for example 
> {code:java}
> fs.s3a.acl.default{code}
> which sets the default ACL on written objects.
> *Solutions*:
> 1) Modify hadoop module:
> The above-linked module could be updated in order to have a new property (and 
> similar for lifecycle)
>  fs.s3a.tags.default
>  which could be a comma separated list of tags to set. For example
> {code:java}
> fs.s3a.tags.default = "jobname:JOBNAME,owner:OWNER"{code}
> This seems like a natural place to put this logic (and it is outside of 
> Flink, if we decide to go this way). However, it does not allow a sink and a 
> checkpoint to have different values for these.
> 2) Expose withTagging from module
> The hadoop module used by Flink's existing filesystem has already exposed put 
> request level tagging (see 
> [this|https://github.com/aws/aws-sdk-java/blob/c06822732612d7208927d2a678073098522085c3/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/PutObjectRequest.java#L292]).
>  This could be used in the Flink filesystem plugin to expose these options. A 
> possible approach could be to somehow incorporate it into the file path, e.g.,
> {code:java}
> path = "TAGS:s3://bucket/path"{code}
>  Or possible as an option that can be applied to the checkpoint and sink 
> configurations, e.g.,
> {code:java}
> env.getCheckpointingConfig().setS3Tags(TAGS) {code}
> and similar for a file sink.
> _Note_: The lifecycle can also be managed using the module: see 
> [here|https://docs.aws.amazon.com/AmazonS3/latest/dev/manage-lifecycle-using-java.html].
>  
>  
>  
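The comma-separated format from the fix (1) example could be parsed as sketched below. Note that `fs.s3a.tags.default` is the ticket's *proposed* property, not an existing Hadoop option; the parsing is illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagsPropertyParseSketch {

    // Parses a value like "jobname:JOBNAME,owner:OWNER" into tag pairs,
    // matching the format of the proposed fs.s3a.tags.default property.
    static Map<String, String> parse(String value) {
        Map<String, String> tags = new LinkedHashMap<>();
        for (String pair : value.split(",")) {
            String[] kv = pair.split(":", 2);
            tags.put(kv[0].trim(), kv[1].trim());
        }
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(parse("jobname:JOBNAME,owner:OWNER"));
        // prints "{jobname=JOBNAME, owner=OWNER}"
    }
}
```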



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22235) Document stability concerns of flamegraphs for heavier jobs

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22235:
---
Labels: auto-deprioritized-major stale-assigned  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Document stability concerns of flamegraphs for heavier jobs
> ---
>
> Key: FLINK-22235
> URL: https://issues.apache.org/jira/browse/FLINK-22235
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Web Frontend
>Reporter: Chesnay Schepler
>Assignee: Alexander Fedulov
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-assigned
> Fix For: 1.18.0
>
>
> The FlameGraph feature added in FLINK-13550 has some known scalability 
> issues, because it issues 1 RPC call per subtask. This may put a lot of 
> pressure on the RPC system, and as such it should be used with caution for 
> heavier jobs until we improved this a bit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30483) Make Avro format support for TIMESTAMP_LTZ

2023-08-18 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-30483:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Make Avro format support for TIMESTAMP_LTZ
> --
>
> Key: FLINK-30483
> URL: https://issues.apache.org/jira/browse/FLINK-30483
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: Mingliang Liu
>Assignee: Jagadesh Adireddi
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Currently Avro format does not support TIMESTAMP_LTZ (short for 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE) type. Avro 1.10+ introduces local timestamp 
> logic type (both milliseconds and microseconds), see spec [1]. As TIMESTAMP 
> currently only supports milliseconds, we can make TIMESTAMP_LTZ support 
> milliseconds first.
> A related work is to support microseconds, and there is already 
> work-in-progress Jira FLINK-23589 for TIMESTAMP type. We can consolidate the 
> effort or track that separately for TIMESTAMP_LTZ.
> [1] 
> https://avro.apache.org/docs/1.10.2/spec.html#Local+timestamp+%28millisecond+precision%29



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Jiabao-Sun commented on pull request #23218: [FLINK-32854][flink-runtime][JUnit5 Migration] The state package of flink-runtime module

2023-08-18 Thread via GitHub


Jiabao-Sun commented on PR #23218:
URL: https://github.com/apache/flink/pull/23218#issuecomment-1684311049

   Thanks @ferenc-csaky for the detailed review and helpful suggestions.
   Most of the comments are fixed.
   Please help review it again when you have time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] ferenc-csaky commented on a diff in pull request #23218: [FLINK-32854][flink-runtime][JUnit5 Migration] The state package of flink-runtime module

2023-08-18 Thread via GitHub


ferenc-csaky commented on code in PR #23218:
URL: https://github.com/apache/flink/pull/23218#discussion_r1298747760


##
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java:
##
@@ -355,44 +336,42 @@ public void testKeyGroupedInternalPriorityQueue(boolean addAll) throws Exception
         if (addAll) {
             priorityQueue.addAll(asList(elements));
         } else {
-            assertTrue(priorityQueue.add(elements[0]));
-            assertTrue(priorityQueue.add(elements[1]));
-            assertFalse(priorityQueue.add(elements[2]));
-            assertFalse(priorityQueue.add(elements[3]));
-            assertFalse(priorityQueue.add(elements[4]));
+            assertThat(priorityQueue.add(elements[0])).isTrue();
+            assertThat(priorityQueue.add(elements[1])).isTrue();
+            assertThat(priorityQueue.add(elements[2])).isFalse();
+            assertThat(priorityQueue.add(elements[3])).isFalse();
+            assertThat(priorityQueue.add(elements[4])).isFalse();
         }
-        assertFalse(priorityQueue.isEmpty());
-        assertThat(
-                priorityQueue.getSubsetForKeyGroup(1),
-                containsInAnyOrder(elementA42, elementA44));
-        assertThat(
-                priorityQueue.getSubsetForKeyGroup(8),
-                containsInAnyOrder(elementB1, elementB3));
+        assertThat(priorityQueue.isEmpty()).isFalse();
+        assertThat(priorityQueue.getSubsetForKeyGroup(1))
+                .containsExactlyInAnyOrder(elementA42, elementA44);
+        assertThat(priorityQueue.getSubsetForKeyGroup(8))
+                .containsExactlyInAnyOrder(elementB1, elementB3);
 
-        assertThat(priorityQueue.peek(), equalTo(elementB1));
-        assertThat(priorityQueue.poll(), equalTo(elementB1));
-        assertThat(priorityQueue.peek(), equalTo(elementB3));
+        assertThat(priorityQueue.peek()).isEqualTo(elementB1);
+        assertThat(priorityQueue.poll()).isEqualTo(elementB1);
+        assertThat(priorityQueue.peek()).isEqualTo(elementB3);
 
         List actualList = new ArrayList<>();
         try (CloseableIterator iterator = priorityQueue.iterator()) {
             iterator.forEachRemaining(actualList::add);
         }
 
-        assertThat(actualList, containsInAnyOrder(elementB3, elementA42, elementA44));
+        assertThat(actualList).containsExactlyInAnyOrder(elementB3, elementA42, elementA44);
 
-        assertEquals(3, priorityQueue.size());
+        assertThat(priorityQueue.size()).isEqualTo(3);

Review Comment:
Right, I did not check that it's a unique class, nvm :)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-elasticsearch] mtfelisb commented on a diff in pull request #53: [FLINK-26088][Connectors/ElasticSearch] Add Elasticsearch 8.0 support

2023-08-18 Thread via GitHub


mtfelisb commented on code in PR #53:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/53#discussion_r1298739730


##
flink-connector-elasticsearch8/src/main/java/org/apache/flink/connector/elasticsearch/sink/Elasticsearch8Sink.java:
##
@@ -0,0 +1,119 @@
+/*
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ */
+package org.apache.flink.connector.elasticsearch.sink;
+
+import org.apache.flink.connector.base.sink.AsyncSinkBase;
+import org.apache.flink.connector.base.sink.writer.BufferedRequestState;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.http.HttpHost;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collection;
+import java.util.Collections;
+
+public class Elasticsearch8Sink extends AsyncSinkBase {
+private static final Logger LOG = 
LoggerFactory.getLogger(Elasticsearch8Sink.class);
+
+private final String username;
+
+private final String password;
+
+private final HttpHost[] httpHosts;
+
+protected Elasticsearch8Sink(
+ElementConverter converter,
+int maxBatchSize,
+int maxInFlightRequests,
+int maxBufferedRequests,
+long maxBatchSizeInBytes,
+long maxTimeInBufferMS,
+long maxRecordSizeInByte,
+String username,
+String password,
+HttpHost[] httpHosts
+) {
+super(
+converter,
+maxBatchSize,
+maxInFlightRequests,
+maxBufferedRequests,
+maxBatchSizeInBytes,
+maxTimeInBufferMS,
+maxRecordSizeInByte
+);
+
+this.username = username;
+this.password = password;
+this.httpHosts = httpHosts;
+}
+
+@Override
+public StatefulSinkWriter> 
createWriter(
+InitContext context
+) {
+LOG.debug("creating writer");

Review Comment:
   I agree. Just removed it :)






[GitHub] [flink] Jiabao-Sun commented on a diff in pull request #23218: [FLINK-32854][flink-runtime][JUnit5 Migration] The state package of flink-runtime module

2023-08-18 Thread via GitHub


Jiabao-Sun commented on code in PR #23218:
URL: https://github.com/apache/flink/pull/23218#discussion_r1298702048


##
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java:
##
@@ -355,44 +336,42 @@ public void testKeyGroupedInternalPriorityQueue(boolean 
addAll) throws Exception
 if (addAll) {
 priorityQueue.addAll(asList(elements));
 } else {
-assertTrue(priorityQueue.add(elements[0]));
-assertTrue(priorityQueue.add(elements[1]));
-assertFalse(priorityQueue.add(elements[2]));
-assertFalse(priorityQueue.add(elements[3]));
-assertFalse(priorityQueue.add(elements[4]));
+assertThat(priorityQueue.add(elements[0])).isTrue();
+assertThat(priorityQueue.add(elements[1])).isTrue();
+assertThat(priorityQueue.add(elements[2])).isFalse();
+assertThat(priorityQueue.add(elements[3])).isFalse();
+assertThat(priorityQueue.add(elements[4])).isFalse();
 }
-assertFalse(priorityQueue.isEmpty());
-assertThat(
-priorityQueue.getSubsetForKeyGroup(1),
-containsInAnyOrder(elementA42, elementA44));
-assertThat(
-priorityQueue.getSubsetForKeyGroup(8),
-containsInAnyOrder(elementB1, elementB3));
+assertThat(priorityQueue.isEmpty()).isFalse();
+assertThat(priorityQueue.getSubsetForKeyGroup(1))
+.containsExactlyInAnyOrder(elementA42, elementA44);
+assertThat(priorityQueue.getSubsetForKeyGroup(8))
+.containsExactlyInAnyOrder(elementB1, elementB3);
 
-assertThat(priorityQueue.peek(), equalTo(elementB1));
-assertThat(priorityQueue.poll(), equalTo(elementB1));
-assertThat(priorityQueue.peek(), equalTo(elementB3));
+assertThat(priorityQueue.peek()).isEqualTo(elementB1);
+assertThat(priorityQueue.poll()).isEqualTo(elementB1);
+assertThat(priorityQueue.peek()).isEqualTo(elementB3);
 
 List actualList = new ArrayList<>();
 try (CloseableIterator iterator = 
priorityQueue.iterator()) {
 iterator.forEachRemaining(actualList::add);
 }
 
-assertThat(actualList, containsInAnyOrder(elementB3, elementA42, 
elementA44));
+assertThat(actualList).containsExactlyInAnyOrder(elementB3, 
elementA42, elementA44);
 
-assertEquals(3, priorityQueue.size());
+assertThat(priorityQueue.size()).isEqualTo(3);

Review Comment:
   I'm afraid we cannot do that change because there's no size assertion on 
`org.apache.flink.runtime.state.InternalPriorityQueue`.






[GitHub] [flink] Jiabao-Sun commented on a diff in pull request #23218: [FLINK-32854][flink-runtime][JUnit5 Migration] The state package of flink-runtime module

2023-08-18 Thread via GitHub


Jiabao-Sun commented on code in PR #23218:
URL: https://github.com/apache/flink/pull/23218#discussion_r1298701513


##
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java:
##
@@ -355,44 +336,42 @@ public void testKeyGroupedInternalPriorityQueue(boolean 
addAll) throws Exception
 if (addAll) {
 priorityQueue.addAll(asList(elements));
 } else {
-assertTrue(priorityQueue.add(elements[0]));
-assertTrue(priorityQueue.add(elements[1]));
-assertFalse(priorityQueue.add(elements[2]));
-assertFalse(priorityQueue.add(elements[3]));
-assertFalse(priorityQueue.add(elements[4]));
+assertThat(priorityQueue.add(elements[0])).isTrue();
+assertThat(priorityQueue.add(elements[1])).isTrue();
+assertThat(priorityQueue.add(elements[2])).isFalse();
+assertThat(priorityQueue.add(elements[3])).isFalse();
+assertThat(priorityQueue.add(elements[4])).isFalse();
 }
-assertFalse(priorityQueue.isEmpty());
-assertThat(
-priorityQueue.getSubsetForKeyGroup(1),
-containsInAnyOrder(elementA42, elementA44));
-assertThat(
-priorityQueue.getSubsetForKeyGroup(8),
-containsInAnyOrder(elementB1, elementB3));
+assertThat(priorityQueue.isEmpty()).isFalse();

Review Comment:
   I'm afraid we cannot do that change because there's no isEmpty assertion on 
`org.apache.flink.runtime.state.InternalPriorityQueue`.






[GitHub] [flink] ferenc-csaky commented on a diff in pull request #23211: [FLINK-32835][runtime] Migrate unit tests in "accumulators" and "blob" packages to JUnit5

2023-08-18 Thread via GitHub


ferenc-csaky commented on code in PR #23211:
URL: https://github.com/apache/flink/pull/23211#discussion_r1298692811


##
flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobCacheCleanupTest.java:
##
@@ -317,20 +303,21 @@ public void testPermanentBlobDeferredCleanup() throws 
IOException, InterruptedEx
 }
 
 @Test
-public void testTransientBlobNoJobCleanup() throws Exception {
-testTransientBlobCleanup(null);
+void testTransientBlobNoJobCleanup() throws Exception {
+testTransientBlobCleanup(tempDir, null);
 }
 
 @Test
-public void testTransientBlobForJobCleanup() throws Exception {
-testTransientBlobCleanup(new JobID());
+void testTransientBlobForJobCleanup() throws Exception {
+testTransientBlobCleanup(tempDir, new JobID());
 }
 
 /**
  * Tests that {@link TransientBlobCache} cleans up after a default TTL and 
keeps files which are
  * constantly accessed.
  */
-private void testTransientBlobCleanup(@Nullable final JobID jobId) throws 
Exception {
+private void testTransientBlobCleanup(final Path tempDir, @Nullable final 
JobID jobId)

Review Comment:
   We don't, I think this was the first test and I went with the temp dir as a 
method param the first time, just forgot to adapt this part.






[GitHub] [flink] ferenc-csaky commented on a diff in pull request #23218: [FLINK-32854][flink-runtime][JUnit5 Migration] The state package of flink-runtime module

2023-08-18 Thread via GitHub


ferenc-csaky commented on code in PR #23218:
URL: https://github.com/apache/flink/pull/23218#discussion_r1296023473


##
flink-runtime/src/test/java/org/apache/flink/runtime/state/AsyncSnapshotCallableTest.java:
##
@@ -102,24 +103,20 @@ public void testExceptionRun() throws Exception {
 }
 
 testBlocker.unblockSuccessfully();
-try {
-task.get();
-Assert.fail();
-} catch (ExecutionException ee) {
-Assert.assertEquals(IOException.class, ee.getCause().getClass());
-}
+assertThatThrownBy(task::get)
+.isInstanceOf(ExecutionException.class)
+.hasCauseInstanceOf(IOException.class);
 
 runner.join();
 
-Assert.assertEquals(
-Arrays.asList(METHOD_CALL, METHOD_CLEANUP),
-testAsyncSnapshotCallable.getInvocationOrder());
+assertThat(testAsyncSnapshotCallable.getInvocationOrder())
+.isEqualTo(Arrays.asList(METHOD_CALL, METHOD_CLEANUP));

Review Comment:
   Instead of `isEqualTo` I'd use `containsExactly(METHOD_CALL, METHOD_CLEANUP)`
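
   The `isEqualTo` → `containsExactly` suggestion is more than style: `isEqualTo`
   delegates to `equals`, which for collections is both order- and type-sensitive,
   while `containsExactly` compares only the iterated elements in order (and prints
   the actual contents on failure). A minimal plain-JDK sketch of the difference
   follows; the class name and values are illustrative, and if `getInvocationOrder()`
   in the PR returns a `List`, both forms would in fact pass.

   ```java
   import java.util.ArrayDeque;
   import java.util.Arrays;
   import java.util.Deque;
   import java.util.List;

   public class EqualsVsContainsExactly {

       // List.equals is interface-sensitive: a Deque holding the same elements
       // in the same order is never equal to a List.
       static boolean equalAsObjects() {
           List<String> expected = Arrays.asList("METHOD_CALL", "METHOD_CLEANUP");
           Deque<String> actual = new ArrayDeque<>(expected);
           return expected.equals(actual); // false: ArrayDeque is not a List
       }

       // What containsExactly effectively checks: same elements in the same
       // iteration order, regardless of the concrete collection type.
       static boolean sameIterationOrder() {
           List<String> expected = Arrays.asList("METHOD_CALL", "METHOD_CLEANUP");
           Deque<String> actual = new ArrayDeque<>(expected);
           return Arrays.asList(actual.toArray()).equals(expected); // true
       }

       public static void main(String[] args) {
           System.out.println("equalAsObjects=" + equalAsObjects());
           System.out.println("sameIterationOrder=" + sameIterationOrder());
       }
   }
   ```

   So when the asserted value is only known to be an `Iterable`, `containsExactly`
   is the safer and more idiomatic AssertJ choice.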



##
flink-runtime/src/test/java/org/apache/flink/runtime/state/AsyncSnapshotCallableTest.java:
##
@@ -156,40 +148,29 @@ public void testCloseRun() throws Exception {
 
 ownerRegistry.close();
 
-try {
-task.get();
-Assert.fail();
-} catch (CancellationException ignored) {
-}
+
assertThatThrownBy(task::get).isInstanceOf(CancellationException.class);
 
 runner.join();
 
-Assert.assertEquals(
-Arrays.asList(METHOD_CALL, METHOD_CANCEL, METHOD_CLEANUP),
-testAsyncSnapshotCallable.getInvocationOrder());
-Assert.assertTrue(testBlocker.isClosed());
+assertThat(testAsyncSnapshotCallable.getInvocationOrder())
+.isEqualTo(Arrays.asList(METHOD_CALL, METHOD_CANCEL, 
METHOD_CLEANUP));
+assertThat(testBlocker.isClosed()).isTrue();
 }
 
 @Test
-public void testCancelBeforeRun() throws Exception {
+void testCancelBeforeRun() throws Exception {
 
 task.cancel(true);
 
 Thread runner = startTask(task);
 
-try {
-task.get();
-Assert.fail();
-} catch (CancellationException ignored) {
-}
+
assertThatThrownBy(task::get).isInstanceOf(CancellationException.class);
 
 runner.join();
 
-Assert.assertEquals(
-Arrays.asList(METHOD_CANCEL, METHOD_CLEANUP),
-testAsyncSnapshotCallable.getInvocationOrder());
-
-Assert.assertTrue(testProvidedResource.isClosed());
+assertThat(testAsyncSnapshotCallable.getInvocationOrder())
+.isEqualTo(Arrays.asList(METHOD_CANCEL, METHOD_CLEANUP));

Review Comment:
   Same `isEqualTo` -> `containsExactly`.



##
flink-runtime/src/test/java/org/apache/flink/runtime/state/AsyncSnapshotCallableTest.java:
##
@@ -156,40 +148,29 @@ public void testCloseRun() throws Exception {
 
 ownerRegistry.close();
 
-try {
-task.get();
-Assert.fail();
-} catch (CancellationException ignored) {
-}
+
assertThatThrownBy(task::get).isInstanceOf(CancellationException.class);
 
 runner.join();
 
-Assert.assertEquals(
-Arrays.asList(METHOD_CALL, METHOD_CANCEL, METHOD_CLEANUP),
-testAsyncSnapshotCallable.getInvocationOrder());
-Assert.assertTrue(testBlocker.isClosed());
+assertThat(testAsyncSnapshotCallable.getInvocationOrder())
+.isEqualTo(Arrays.asList(METHOD_CALL, METHOD_CANCEL, 
METHOD_CLEANUP));

Review Comment:
   Same `isEqualTo` -> `containsExactly`.



##
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java:
##
@@ -355,44 +336,42 @@ public void testKeyGroupedInternalPriorityQueue(boolean 
addAll) throws Exception
 if (addAll) {
 priorityQueue.addAll(asList(elements));
 } else {
-assertTrue(priorityQueue.add(elements[0]));
-assertTrue(priorityQueue.add(elements[1]));
-assertFalse(priorityQueue.add(elements[2]));
-assertFalse(priorityQueue.add(elements[3]));
-assertFalse(priorityQueue.add(elements[4]));
+assertThat(priorityQueue.add(elements[0])).isTrue();
+assertThat(priorityQueue.add(elements[1])).isTrue();
+assertThat(priorityQueue.add(elements[2])).isFalse();
+assertThat(priorityQueue.add(elements[3])).isFalse();
+assertThat(priorityQueue.add(elements[4])).isFalse();
 }
-assertFalse(priorityQueue.isEmpty());
-assertThat(
-priorityQueue.getSubsetForKeyGroup(1),
-containsInAnyOrder(elementA42, elementA44));

[GitHub] [flink] 1996fanrui commented on a diff in pull request #23227: [FLINK-32840][runtime][JUnit5 Migration] The executiongraph package of flink-runtime module

2023-08-18 Thread via GitHub


1996fanrui commented on code in PR #23227:
URL: https://github.com/apache/flink/pull/23227#discussion_r1298640863


##
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionGraphTestUtils.java:
##
@@ -481,63 +480,61 @@ public static ExecutionAttemptID createExecutionAttemptId(
  * @param outputJobVertices downstream vertices of the verified vertex, 
used to check produced
  * data sets of generated vertex
  */
-public static void verifyGeneratedExecutionJobVertex(
+static void verifyGeneratedExecutionJobVertex(
 ExecutionGraph executionGraph,
 JobVertex originJobVertex,
 @Nullable List inputJobVertices,
 @Nullable List outputJobVertices) {
 
 ExecutionJobVertex ejv = 
executionGraph.getAllVertices().get(originJobVertex.getID());
-assertNotNull(ejv);
+assertThat(ejv).isNotNull();
 
 // verify basic properties
-assertEquals(originJobVertex.getParallelism(), ejv.getParallelism());
-assertEquals(executionGraph.getJobID(), ejv.getJobId());
-assertEquals(originJobVertex.getID(), ejv.getJobVertexId());
-assertEquals(originJobVertex, ejv.getJobVertex());
+
assertThat(originJobVertex.getParallelism()).isEqualTo(ejv.getParallelism());
+assertThat(executionGraph.getJobID()).isEqualTo(ejv.getJobId());
+assertThat(originJobVertex.getID()).isEqualTo(ejv.getJobVertexId());
+assertThat(originJobVertex).isEqualTo(ejv.getJobVertex());
 
 // verify produced data sets
 if (outputJobVertices == null) {
-assertEquals(0, ejv.getProducedDataSets().length);
+assertThat(ejv.getProducedDataSets().length).isZero();
 } else {
-assertEquals(outputJobVertices.size(), 
ejv.getProducedDataSets().length);
+
assertThat(outputJobVertices.size()).isEqualTo(ejv.getProducedDataSets().length);

Review Comment:
   ```suggestion
   
assertThat(outputJobVertices).hasSize(ejv.getProducedDataSets().length);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphDeploymentTest.java:
##
@@ -221,7 +221,7 @@ void testBuildDeploymentDescriptor() throws Exception {
 Collection consumedPartitions = 
descr.getInputGates();
 
 assertThat(producedPartitions.size()).isEqualTo(2);
-assertThat(consumedPartitions.size()).isEqualTo(1);
+assertThat(consumedPartitions.size()).isOne();

Review Comment:
   ```suggestion
   assertThat(consumedPartitions).hasSize(1);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionGraphTestUtils.java:
##
@@ -481,63 +480,61 @@ public static ExecutionAttemptID createExecutionAttemptId(
  * @param outputJobVertices downstream vertices of the verified vertex, 
used to check produced
  * data sets of generated vertex
  */
-public static void verifyGeneratedExecutionJobVertex(
+static void verifyGeneratedExecutionJobVertex(
 ExecutionGraph executionGraph,
 JobVertex originJobVertex,
 @Nullable List inputJobVertices,
 @Nullable List outputJobVertices) {
 
 ExecutionJobVertex ejv = 
executionGraph.getAllVertices().get(originJobVertex.getID());
-assertNotNull(ejv);
+assertThat(ejv).isNotNull();
 
 // verify basic properties
-assertEquals(originJobVertex.getParallelism(), ejv.getParallelism());
-assertEquals(executionGraph.getJobID(), ejv.getJobId());
-assertEquals(originJobVertex.getID(), ejv.getJobVertexId());
-assertEquals(originJobVertex, ejv.getJobVertex());
+
assertThat(originJobVertex.getParallelism()).isEqualTo(ejv.getParallelism());
+assertThat(executionGraph.getJobID()).isEqualTo(ejv.getJobId());
+assertThat(originJobVertex.getID()).isEqualTo(ejv.getJobVertexId());
+assertThat(originJobVertex).isEqualTo(ejv.getJobVertex());
 
 // verify produced data sets
 if (outputJobVertices == null) {
-assertEquals(0, ejv.getProducedDataSets().length);
+assertThat(ejv.getProducedDataSets().length).isZero();
 } else {
-assertEquals(outputJobVertices.size(), 
ejv.getProducedDataSets().length);
+
assertThat(outputJobVertices.size()).isEqualTo(ejv.getProducedDataSets().length);
 for (int i = 0; i < outputJobVertices.size(); i++) {
-assertEquals(
-originJobVertex.getProducedDataSets().get(i).getId(),
-ejv.getProducedDataSets()[i].getId());
-assertEquals(
-originJobVertex.getParallelism(),
-ejv.getProducedDataSets()[0].getPartitions().length);
+
assertThat(originJobVertex.getProducedDataSets().get(i).getId())
+

[GitHub] [flink] flinkbot commented on pull request #23241: [FLINK-32837][JUnit5 Migration] Migrate the client and clusterframework packages of flink-runtime module to junit5

2023-08-18 Thread via GitHub


flinkbot commented on PR #23241:
URL: https://github.com/apache/flink/pull/23241#issuecomment-1684164303

   
   ## CI report:
   
   * 08fe96c33b30350550379349723c84025e5c70d1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] 1996fanrui commented on pull request #23241: [FLINK-32837][JUnit5 Migration] Migrate the client and clusterframework packages of flink-runtime module to junit5

2023-08-18 Thread via GitHub


1996fanrui commented on PR #23241:
URL: https://github.com/apache/flink/pull/23241#issuecomment-1684163730

   cc @Jiabao-Sun 





[jira] [Updated] (FLINK-32837) [JUnit5 Migration] The client, clusterframework and concurrent packages of flink-runtime module

2023-08-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32837:
---
Labels: pull-request-available  (was: )

> [JUnit5 Migration] The client, clusterframework and concurrent packages of 
> flink-runtime module
> ---
>
> Key: FLINK-32837
> URL: https://issues.apache.org/jira/browse/FLINK-32837
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] 1996fanrui opened a new pull request, #23241: [FLINK-32837][JUnit5 Migration] Migrate the client and clusterframework packages of flink-runtime module to junit5

2023-08-18 Thread via GitHub


1996fanrui opened a new pull request, #23241:
URL: https://github.com/apache/flink/pull/23241

   [FLINK-32837][JUnit5 Migration] Migrate the client and clusterframework 
packages of flink-runtime module to junit5





[GitHub] [flink] Jiabao-Sun commented on a diff in pull request #23204: [FLINK-32855][flink-runtime][JUnit5 Migration] Module: The taskexecutor package of flink-runtime

2023-08-18 Thread via GitHub


Jiabao-Sun commented on code in PR #23204:
URL: https://github.com/apache/flink/pull/23204#discussion_r1298630869


##
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/TaskExecutorITCase.java:
##
@@ -38,42 +38,50 @@
 import org.apache.flink.runtime.testutils.CommonTestUtils;
 import org.apache.flink.runtime.testutils.MiniClusterResource;
 import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
-import org.apache.flink.util.TestLogger;
 import org.apache.flink.util.function.SupplierWithException;
 
-import org.junit.Rule;
-import org.junit.Test;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
 
 import java.io.IOException;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.CountDownLatch;
 import java.util.function.Predicate;
 import java.util.function.Supplier;
 
-import static org.hamcrest.Matchers.is;
-import static org.junit.Assert.assertThat;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Integration tests for the {@link TaskExecutor}. */
-public class TaskExecutorITCase extends TestLogger {
+class TaskExecutorITCase {
 
 private static final int NUM_TMS = 2;
 private static final int SLOTS_PER_TM = 2;
 private static final int PARALLELISM = NUM_TMS * SLOTS_PER_TM;
 
-@Rule
-public final MiniClusterResource miniClusterResource =
+private final MiniClusterResource miniClusterResource =

Review Comment:
   Hi @1996fanrui.
   I refactored the code using `InternalMiniClusterExtension` and 
`EachCallbackWrapper`, and it works fine.
   Please help review it again when you have time.
   Thanks a lot.
   
   ```java
   private static final InternalMiniClusterExtension MINI_CLUSTER_EXTENSION 
=
   new InternalMiniClusterExtension(
   new MiniClusterResourceConfiguration.Builder()
   .setNumberTaskManagers(NUM_TMS)
   .setNumberSlotsPerTaskManager(SLOTS_PER_TM)
   .build());
   
   @RegisterExtension
   private final EachCallbackWrapper miniClusterExtensionWrapper =
   new EachCallbackWrapper<>(
   new CustomExtension() {
   @Override
   public void before(ExtensionContext context) throws 
Exception {
   MINI_CLUSTER_EXTENSION.beforeAll(context);
   }
   
   @Override
   public void after(ExtensionContext context) throws 
Exception {
   MINI_CLUSTER_EXTENSION.afterAll(context);
   }
   });
   ```
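
   Stripped of the JUnit 5 types, the `EachCallbackWrapper` idea in the snippet
   above is just adapting a class-scoped (beforeAll/afterAll) resource to
   per-test hooks. A hypothetical, dependency-free sketch of the pattern; the
   names are illustrative, not Flink's actual classes:

   ```java
   public class EachCallbackWrapperSketch {

       // Stand-in for a resource designed around class-level lifecycle,
       // e.g. a mini cluster that is normally started once per test class.
       static class ClassScopedResource {
           boolean running;
           void beforeAll() { running = true; }
           void afterAll()  { running = false; }
       }

       // Per-test lifecycle hooks, analogous to JUnit 5's
       // BeforeEachCallback / AfterEachCallback.
       interface EachCallback {
           void before();
           void after();
       }

       // The wrapper forwards the per-test hooks to the class-scoped
       // callbacks, so the resource is rebuilt around every test
       // instead of once per class. Returns the resource state observed
       // after before() and after after(), in that order.
       static boolean[] lifecycle() {
           ClassScopedResource cluster = new ClassScopedResource();
           EachCallback wrapper = new EachCallback() {
               @Override public void before() { cluster.beforeAll(); }
               @Override public void after()  { cluster.afterAll(); }
           };
           wrapper.before();
           boolean afterBefore = cluster.running;
           wrapper.after();
           boolean afterAfter = cluster.running;
           return new boolean[] { afterBefore, afterAfter };
       }

       public static void main(String[] args) {
           boolean[] s = lifecycle();
           System.out.println("running after before(): " + s[0]); // true
           System.out.println("running after after():  " + s[1]); // false
       }
   }
   ```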






[GitHub] [flink] xuzifu666 commented on pull request #23220: [FLINK-32871][SQL] Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-18 Thread via GitHub


xuzifu666 commented on PR #23220:
URL: https://github.com/apache/flink/pull/23220#issuecomment-1684121514

   cc @beyond1920 





[GitHub] [flink] wangzzu commented on pull request #23227: [FLINK-32840][runtime][JUnit5 Migration] The executiongraph package of flink-runtime module

2023-08-18 Thread via GitHub


wangzzu commented on PR #23227:
URL: https://github.com/apache/flink/pull/23227#issuecomment-1684110666

   > Thanks @wangzzu for the contribution, and @wangzzu for the review!
   > 
   > > Would you mind updating all change about these improvements?
   > > ```
   > > * Make test classes package-visible and do not extend JUnit4-based 
TestLogger
   > > 
   > > * Update isEqualTo(0) to isZero()
   > > 
   > > * Update isEqualTo(1) to isOne()
   > > ```
   > 
   > I still see a few remaining `isEqualTo(0)` and `isEqualTo(1)`, could you search 
and fix them?
   
   @1996fanrui Thanks for your review, I have updated these and checked globally.





[GitHub] [flink] deepyaman commented on a diff in pull request #23141: [FLINK-32758][python] Remove PyFlink dependencies' upper bounds

2023-08-18 Thread via GitHub


deepyaman commented on code in PR #23141:
URL: https://github.com/apache/flink/pull/23141#discussion_r1298543188


##
flink-python/dev/dev-requirements.txt:
##
@@ -15,20 +15,20 @@
 pip>=20.3
 setuptools>=18.0
 wheel
-apache-beam==2.43.0
-cython==0.29.24
+apache-beam>=2.43.0,<2.49.0
+cython>=0.29.24
 py4j==0.10.9.7
 python-dateutil>=2.8.0,<3
-cloudpickle==2.2.0
-avro-python3>=1.8.1,!=1.9.2,<1.10.0
-pandas>=1.3.0,<1.4.0
-pyarrow>=5.0.0,<9.0.0
+cloudpickle>=2.2.0

Review Comment:
   @HuangXingBo Sounds good! I have bound `cloudpickle~=2.2.0` for now, as you 
suggest. :)






[GitHub] [flink] 1996fanrui commented on a diff in pull request #23199: [FLINK-32858][tests][JUnit5 migration] Migrate flink-runtime/utils tests to JUnit5

2023-08-18 Thread via GitHub


1996fanrui commented on code in PR #23199:
URL: https://github.com/apache/flink/pull/23199#discussion_r1298527676


##
flink-runtime/src/test/java/org/apache/flink/runtime/util/config/memory/ProcessMemoryUtilsTestBase.java:
##
@@ -63,86 +62,83 @@ protected ProcessMemoryUtilsTestBase(
 this.newOptionForLegacyHeapOption = 
checkNotNull(newOptionForLegacyHeapOption);
 }
 
-@Before
-public void setup() {
+@BeforeEach
+void setup() {
 oldEnvVariables = System.getenv();
 }
 
-@After
-public void teardown() {
+@AfterEach
+void teardown() {
 if (oldEnvVariables != null) {
 CommonTestUtils.setEnv(oldEnvVariables, true);
 }
 }
 
 @Test
-public void testGenerateJvmParameters() {
+void testGenerateJvmParameters() {
 ProcessMemorySpec spec = JvmArgTestingProcessMemorySpec.generate();
 String jvmParamsStr = 
ProcessMemoryUtils.generateJvmParametersStr(spec, true);
 Map configs = 
ConfigurationUtils.parseJvmArgString(jvmParamsStr);
 
-assertThat(configs.size(), is(4));
-assertThat(MemorySize.parse(configs.get("-Xmx")), 
is(spec.getJvmHeapMemorySize()));
-assertThat(MemorySize.parse(configs.get("-Xms")), 
is(spec.getJvmHeapMemorySize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")),
-is(spec.getJvmMetaspaceSize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxDirectMemorySize=")),
-is(spec.getJvmDirectMemorySize()));
+assertThat(configs).hasSize(4);
+
assertThat(MemorySize.parse(configs.get("-Xmx"))).isEqualTo(spec.getJvmHeapMemorySize());
+
assertThat(MemorySize.parse(configs.get("-Xms"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")))
+.isEqualTo(spec.getJvmMetaspaceSize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxDirectMemorySize=")))
+.isEqualTo(spec.getJvmDirectMemorySize());
 }
 
 @Test
-public void testGenerateJvmParametersWithoutDirectMemoryLimit() {
+void testGenerateJvmParametersWithoutDirectMemoryLimit() {
 ProcessMemorySpec spec = JvmArgTestingProcessMemorySpec.generate();
 String jvmParamsStr = 
ProcessMemoryUtils.generateJvmParametersStr(spec, false);
 Map configs = 
ConfigurationUtils.parseJvmArgString(jvmParamsStr);
 
-assertThat(configs.size(), is(3));
-assertThat(MemorySize.parse(configs.get("-Xmx")), 
is(spec.getJvmHeapMemorySize()));
-assertThat(MemorySize.parse(configs.get("-Xms")), 
is(spec.getJvmHeapMemorySize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")),
-is(spec.getJvmMetaspaceSize()));
-assertThat(configs.containsKey("-XX:MaxDirectMemorySize="), is(false));
+assertThat(configs).hasSize(3);
+
assertThat(MemorySize.parse(configs.get("-Xmx"))).isEqualTo(spec.getJvmHeapMemorySize());
+
assertThat(MemorySize.parse(configs.get("-Xms"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")))
+.isEqualTo(spec.getJvmMetaspaceSize());
+assertThat(configs.containsKey("-XX:MaxDirectMemorySize=")).isFalse();
 }
 
 @Test
-public void testConfigTotalFlinkMemory() {
+void testConfigTotalFlinkMemory() {
 MemorySize totalFlinkMemorySize = MemorySize.parse("1g");
 
 Configuration conf = new Configuration();
 conf.set(options.getTotalFlinkMemoryOption(), totalFlinkMemorySize);
 
 T processSpec = processSpecFromConfig(conf);
-assertThat(processSpec.getTotalFlinkMemorySize(), 
is(totalFlinkMemorySize));
+
assertThat(processSpec.getTotalFlinkMemorySize()).isEqualTo(totalFlinkMemorySize);
 }
 
 @Test
-public void testConfigTotalProcessMemorySize() {
+void testConfigTotalProcessMemorySize() {
 MemorySize totalProcessMemorySize = MemorySize.parse("2g");
 
 Configuration conf = new Configuration();
 conf.set(options.getTotalProcessMemoryOption(), 
totalProcessMemorySize);
 
 T processSpec = processSpecFromConfig(conf);
-assertThat(processSpec.getTotalProcessMemorySize(), 
is(totalProcessMemorySize));
+
assertThat(processSpec.getTotalProcessMemorySize()).isEqualTo(totalProcessMemorySize);
 }
 
 @Test
-public void testExceptionShouldContainRequiredConfigOptions() {
+void testExceptionShouldContainRequiredConfigOptions() {
 try {
 processSpecFromConfig(new Configuration());
 } catch (IllegalConfigurationException e) {
 options.getRequiredFineGrainedOptions()
-.forEach(option -> assertThat(e.getMessage(), 
containsString(option.key())));
-
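
For context on the hunks above: the tests generate a JVM parameter string and
parse it back into a map keyed by flag prefix (`-Xmx`, `-Xms`,
`-XX:MaxMetaspaceSize=`, ...). A rough, hypothetical sketch of such a parser;
this is an assumption for illustration, not Flink's actual
`ConfigurationUtils.parseJvmArgString`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JvmArgParseSketch {

    // Split a JVM parameter string on whitespace and key each size by its
    // flag prefix. -Xmx/-Xms carry the value directly after the flag;
    // -XX: options carry it after '=' (the key keeps the trailing '=',
    // matching the lookups in the test above).
    static Map<String, String> parse(String jvmParams) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String token : jvmParams.trim().split("\\s+")) {
            if (token.startsWith("-Xmx")) {
                out.put("-Xmx", token.substring(4));
            } else if (token.startsWith("-Xms")) {
                out.put("-Xms", token.substring(4));
            } else if (token.contains("=")) {
                int i = token.indexOf('=') + 1;
                out.put(token.substring(0, i), token.substring(i));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> configs =
                parse("-Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456");
        System.out.println(configs.size());      // 3
        System.out.println(configs.get("-Xmx")); // 1073741824
    }
}
```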

[GitHub] [flink] 1996fanrui commented on a diff in pull request #23199: [FLINK-32858][tests][JUnit5 migration] Migrate flink-runtime/utils tests to JUnit5

2023-08-18 Thread via GitHub


1996fanrui commented on code in PR #23199:
URL: https://github.com/apache/flink/pull/23199#discussion_r1298445035


##
flink-runtime/src/test/java/org/apache/flink/runtime/util/config/memory/ProcessMemoryUtilsTestBase.java:
##
@@ -63,86 +62,83 @@ protected ProcessMemoryUtilsTestBase(
 this.newOptionForLegacyHeapOption = 
checkNotNull(newOptionForLegacyHeapOption);
 }
 
-@Before
-public void setup() {
+@BeforeEach
+void setup() {
 oldEnvVariables = System.getenv();
 }
 
-@After
-public void teardown() {
+@AfterEach
+void teardown() {
 if (oldEnvVariables != null) {
 CommonTestUtils.setEnv(oldEnvVariables, true);
 }
 }
 
 @Test
-public void testGenerateJvmParameters() {
+void testGenerateJvmParameters() {
 ProcessMemorySpec spec = JvmArgTestingProcessMemorySpec.generate();
 String jvmParamsStr = ProcessMemoryUtils.generateJvmParametersStr(spec, true);
 Map<String, String> configs = ConfigurationUtils.parseJvmArgString(jvmParamsStr);
 
-assertThat(configs.size(), is(4));
-assertThat(MemorySize.parse(configs.get("-Xmx")), is(spec.getJvmHeapMemorySize()));
-assertThat(MemorySize.parse(configs.get("-Xms")), is(spec.getJvmHeapMemorySize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")),
-is(spec.getJvmMetaspaceSize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxDirectMemorySize=")),
-is(spec.getJvmDirectMemorySize()));
+assertThat(configs).hasSize(4);
+assertThat(MemorySize.parse(configs.get("-Xmx"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-Xms"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")))
+.isEqualTo(spec.getJvmMetaspaceSize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxDirectMemorySize=")))
+.isEqualTo(spec.getJvmDirectMemorySize());
 }
 
 @Test
-public void testGenerateJvmParametersWithoutDirectMemoryLimit() {
+void testGenerateJvmParametersWithoutDirectMemoryLimit() {
 ProcessMemorySpec spec = JvmArgTestingProcessMemorySpec.generate();
 String jvmParamsStr = ProcessMemoryUtils.generateJvmParametersStr(spec, false);
 Map<String, String> configs = ConfigurationUtils.parseJvmArgString(jvmParamsStr);
 
-assertThat(configs.size(), is(3));
-assertThat(MemorySize.parse(configs.get("-Xmx")), is(spec.getJvmHeapMemorySize()));
-assertThat(MemorySize.parse(configs.get("-Xms")), is(spec.getJvmHeapMemorySize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")),
-is(spec.getJvmMetaspaceSize()));
-assertThat(configs.containsKey("-XX:MaxDirectMemorySize="), is(false));
+assertThat(configs).hasSize(3);
+assertThat(MemorySize.parse(configs.get("-Xmx"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-Xms"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")))
+.isEqualTo(spec.getJvmMetaspaceSize());
+assertThat(configs.containsKey("-XX:MaxDirectMemorySize=")).isFalse();
 }
 
 @Test
-public void testConfigTotalFlinkMemory() {
+void testConfigTotalFlinkMemory() {
 MemorySize totalFlinkMemorySize = MemorySize.parse("1g");
 
 Configuration conf = new Configuration();
 conf.set(options.getTotalFlinkMemoryOption(), totalFlinkMemorySize);
 
 T processSpec = processSpecFromConfig(conf);
-assertThat(processSpec.getTotalFlinkMemorySize(), is(totalFlinkMemorySize));
+assertThat(processSpec.getTotalFlinkMemorySize()).isEqualTo(totalFlinkMemorySize);
 }
 
 @Test
-public void testConfigTotalProcessMemorySize() {
+void testConfigTotalProcessMemorySize() {
 MemorySize totalProcessMemorySize = MemorySize.parse("2g");
 
 Configuration conf = new Configuration();
 conf.set(options.getTotalProcessMemoryOption(), totalProcessMemorySize);
 
 T processSpec = processSpecFromConfig(conf);
-assertThat(processSpec.getTotalProcessMemorySize(), is(totalProcessMemorySize));
+assertThat(processSpec.getTotalProcessMemorySize()).isEqualTo(totalProcessMemorySize);
 }
 
 @Test
-public void testExceptionShouldContainRequiredConfigOptions() {
+void testExceptionShouldContainRequiredConfigOptions() {
 try {
 processSpecFromConfig(new Configuration());
 } catch (IllegalConfigurationException e) {
 options.getRequiredFineGrainedOptions()
-.forEach(option -> assertThat(e.getMessage(), containsString(option.key())));
-
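Both review threads above assert over a map parsed from a JVM parameter string. A JDK-only sketch of such parsing — `JvmArgs.parseJvmArgs` is a hypothetical stand-in, not Flink's actual `ConfigurationUtils.parseJvmArgString` — mirroring the keys the tests look up:

```java
import java.util.HashMap;
import java.util.Map;

public class JvmArgs {

    // Split a JVM parameter string such as "-Xmx1g -Xms1g -XX:MaxMetaspaceSize=256m"
    // into a map keyed the way the assertions above look entries up
    // ("-Xmx", "-Xms", and "-XX:...=" prefixes including the '=').
    public static Map<String, String> parseJvmArgs(String jvmParamsStr) {
        Map<String, String> configs = new HashMap<>();
        for (String token : jvmParamsStr.trim().split("\\s+")) {
            if (token.startsWith("-Xmx")) {
                configs.put("-Xmx", token.substring(4));
            } else if (token.startsWith("-Xms")) {
                configs.put("-Xms", token.substring(4));
            } else if (token.startsWith("-XX:") && token.contains("=")) {
                int eq = token.indexOf('=') + 1;
                configs.put(token.substring(0, eq), token.substring(eq));
            }
        }
        return configs;
    }

    public static void main(String[] args) {
        Map<String, String> configs =
                parseJvmArgs("-Xmx1g -Xms1g -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=128m");
        System.out.println(configs.size()); // prints 4, matching assertThat(configs).hasSize(4)
    }
}
```

With the direct-memory flag omitted, the map holds three entries and no `-XX:MaxDirectMemorySize=` key, which is exactly what `testGenerateJvmParametersWithoutDirectMemoryLimit` checks.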

[GitHub] [flink] X-czh commented on a diff in pull request #23199: [FLINK-32858][tests][JUnit5 migration] Migrate flink-runtime/utils tests to JUnit5

2023-08-18 Thread via GitHub


X-czh commented on code in PR #23199:
URL: https://github.com/apache/flink/pull/23199#discussion_r1298523038


##
flink-runtime/src/test/java/org/apache/flink/runtime/util/config/memory/ProcessMemoryUtilsTestBase.java:
##
@@ -63,86 +62,83 @@ protected ProcessMemoryUtilsTestBase(
 this.newOptionForLegacyHeapOption = 
checkNotNull(newOptionForLegacyHeapOption);
 }
 
-@Before
-public void setup() {
+@BeforeEach
+void setup() {
 oldEnvVariables = System.getenv();
 }
 
-@After
-public void teardown() {
+@AfterEach
+void teardown() {
 if (oldEnvVariables != null) {
 CommonTestUtils.setEnv(oldEnvVariables, true);
 }
 }
 
 @Test
-public void testGenerateJvmParameters() {
+void testGenerateJvmParameters() {
 ProcessMemorySpec spec = JvmArgTestingProcessMemorySpec.generate();
 String jvmParamsStr = ProcessMemoryUtils.generateJvmParametersStr(spec, true);
 Map<String, String> configs = ConfigurationUtils.parseJvmArgString(jvmParamsStr);
 
-assertThat(configs.size(), is(4));
-assertThat(MemorySize.parse(configs.get("-Xmx")), is(spec.getJvmHeapMemorySize()));
-assertThat(MemorySize.parse(configs.get("-Xms")), is(spec.getJvmHeapMemorySize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")),
-is(spec.getJvmMetaspaceSize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxDirectMemorySize=")),
-is(spec.getJvmDirectMemorySize()));
+assertThat(configs).hasSize(4);
+assertThat(MemorySize.parse(configs.get("-Xmx"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-Xms"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")))
+.isEqualTo(spec.getJvmMetaspaceSize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxDirectMemorySize=")))
+.isEqualTo(spec.getJvmDirectMemorySize());
 }
 
 @Test
-public void testGenerateJvmParametersWithoutDirectMemoryLimit() {
+void testGenerateJvmParametersWithoutDirectMemoryLimit() {
 ProcessMemorySpec spec = JvmArgTestingProcessMemorySpec.generate();
 String jvmParamsStr = ProcessMemoryUtils.generateJvmParametersStr(spec, false);
 Map<String, String> configs = ConfigurationUtils.parseJvmArgString(jvmParamsStr);
 
-assertThat(configs.size(), is(3));
-assertThat(MemorySize.parse(configs.get("-Xmx")), is(spec.getJvmHeapMemorySize()));
-assertThat(MemorySize.parse(configs.get("-Xms")), is(spec.getJvmHeapMemorySize()));
-assertThat(
-MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")),
-is(spec.getJvmMetaspaceSize()));
-assertThat(configs.containsKey("-XX:MaxDirectMemorySize="), is(false));
+assertThat(configs).hasSize(3);
+assertThat(MemorySize.parse(configs.get("-Xmx"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-Xms"))).isEqualTo(spec.getJvmHeapMemorySize());
+assertThat(MemorySize.parse(configs.get("-XX:MaxMetaspaceSize=")))
+.isEqualTo(spec.getJvmMetaspaceSize());
+assertThat(configs.containsKey("-XX:MaxDirectMemorySize=")).isFalse();
 }
 
 @Test
-public void testConfigTotalFlinkMemory() {
+void testConfigTotalFlinkMemory() {
 MemorySize totalFlinkMemorySize = MemorySize.parse("1g");
 
 Configuration conf = new Configuration();
 conf.set(options.getTotalFlinkMemoryOption(), totalFlinkMemorySize);
 
 T processSpec = processSpecFromConfig(conf);
-assertThat(processSpec.getTotalFlinkMemorySize(), is(totalFlinkMemorySize));
+assertThat(processSpec.getTotalFlinkMemorySize()).isEqualTo(totalFlinkMemorySize);
 }
 
 @Test
-public void testConfigTotalProcessMemorySize() {
+void testConfigTotalProcessMemorySize() {
 MemorySize totalProcessMemorySize = MemorySize.parse("2g");
 
 Configuration conf = new Configuration();
 conf.set(options.getTotalProcessMemoryOption(), totalProcessMemorySize);
 
 T processSpec = processSpecFromConfig(conf);
-assertThat(processSpec.getTotalProcessMemorySize(), is(totalProcessMemorySize));
+assertThat(processSpec.getTotalProcessMemorySize()).isEqualTo(totalProcessMemorySize);
 }
 
 @Test
-public void testExceptionShouldContainRequiredConfigOptions() {
+void testExceptionShouldContainRequiredConfigOptions() {
 try {
 processSpecFromConfig(new Configuration());
 } catch (IllegalConfigurationException e) {
 options.getRequiredFineGrainedOptions()
-.forEach(option -> assertThat(e.getMessage(), containsString(option.key())));
-
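The final hunk in both copies of this diff asserts that a configuration failure names every required option in its message. A JDK-only sketch of that contract — `validate` and the nested exception are hypothetical stand-ins, not Flink's `processSpecFromConfig` or its real `IllegalConfigurationException`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RequiredOptionCheck {

    // Hypothetical stand-in for Flink's IllegalConfigurationException.
    public static class IllegalConfigurationException extends RuntimeException {
        public IllegalConfigurationException(String message) {
            super(message);
        }
    }

    // Fail when any required option is absent, naming every missing key in the
    // message -- the property the containsString assertions above rely on.
    public static void validate(Map<String, String> conf, List<String> requiredKeys) {
        List<String> missing = new ArrayList<>();
        for (String key : requiredKeys) {
            if (!conf.containsKey(key)) {
                missing.add(key);
            }
        }
        if (!missing.isEmpty()) {
            throw new IllegalConfigurationException("Missing required options: " + missing);
        }
    }
}
```

Because every missing key lands in the message, a test can loop over the required options and check each one, as the quoted `forEach` does.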

[jira] [Resolved] (FLINK-32800) Release Testing: Verify FLINK-32548 - Make watermark alignment ready for production use

2023-08-18 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-32800.
-
Resolution: Fixed

> Release Testing: Verify FLINK-32548 - Make watermark alignment ready for 
> production use
> ---
>
> Key: FLINK-32800
> URL: https://issues.apache.org/jira/browse/FLINK-32800
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.18.0
>
>






[jira] [Commented] (FLINK-32801) Release Testing: Verify FLINK-32524 - Improve the watermark aggregation performance when enabling the watermark alignment

2023-08-18 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755995#comment-17755995
 ] 

Rui Fan commented on FLINK-32801:
-

FLINK-32524 improved the watermark aggregation performance when watermark 
alignment is enabled, and I have tested it locally. It works well. :)

> Release Testing: Verify FLINK-32524 - Improve the watermark aggregation 
> performance when enabling the watermark alignment
> -
>
> Key: FLINK-32801
> URL: https://issues.apache.org/jira/browse/FLINK-32801
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.18.0
>
>






[jira] [Resolved] (FLINK-32801) Release Testing: Verify FLINK-32524 - Improve the watermark aggregation performance when enabling the watermark alignment

2023-08-18 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-32801.
-
Resolution: Fixed

> Release Testing: Verify FLINK-32524 - Improve the watermark aggregation 
> performance when enabling the watermark alignment
> -
>
> Key: FLINK-32801
> URL: https://issues.apache.org/jira/browse/FLINK-32801
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.18.0
>
>






[jira] [Commented] (FLINK-32800) Release Testing: Verify FLINK-32548 - Make watermark alignment ready for production use

2023-08-18 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755994#comment-17755994
 ] 

Rui Fan commented on FLINK-32800:
-

FLINK-32548 fixed a series of bugs related to watermark alignment, and I have 
tested it locally. It works well. :)

> Release Testing: Verify FLINK-32548 - Make watermark alignment ready for 
> production use
> ---
>
> Key: FLINK-32800
> URL: https://issues.apache.org/jira/browse/FLINK-32800
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.18.0
>
>






[GitHub] [flink] flinkbot commented on pull request #23240: Update local_installation.md

2023-08-18 Thread via GitHub


flinkbot commented on PR #23240:
URL: https://github.com/apache/flink/pull/23240#issuecomment-1683996674

   
   ## CI report:
   
   * 7794a0d671d0106bc7ac34b2c43d9cb30d19c6fa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] arthurlee opened a new pull request, #23240: Update local_installation.md

2023-08-18 Thread via GitHub


arthurlee opened a new pull request, #23240:
URL: https://github.com/apache/flink/pull/23240

   Add a note that JDK 17 and above are not supported yet.
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[jira] [Commented] (FLINK-32806) EmbeddedJobResultStore keeps the non-dirty job entries forever

2023-08-18 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755980#comment-17755980
 ] 

Matthias Pohl commented on FLINK-32806:
---

Thanks for picking it up. I assigned the issue to you.

> EmbeddedJobResultStore keeps the non-dirty job entries forever
> --
>
> Key: FLINK-32806
> URL: https://issues.apache.org/jira/browse/FLINK-32806
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Assignee: hk__lrzy
>Priority: Major
>  Labels: starter
>
> The {{EmbeddedJobResultStore}} keeps the entries of cleaned-up jobs in-memory 
> forever. We might want to add a TTL to have those entries be removed after a 
> certain amount of time to allow maintaining the memory footprint of the 
> {{EmbeddedJobResultStore}}.
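A TTL as proposed above can be sketched with a timestamped map plus a periodic sweep. This is a JDK-only illustration with hypothetical names (`TtlResultStore`, `markClean`, `sweep`), not the actual `EmbeddedJobResultStore` API:

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlResultStore {

    // Job id -> time (millis) at which the job's clean entry was recorded.
    private final Map<String, Long> cleanEntryTimestamps = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlResultStore(Duration ttl) {
        this.ttlMillis = ttl.toMillis();
    }

    // Record a job as cleanly finished at the given time.
    public void markClean(String jobId, long nowMillis) {
        cleanEntryTimestamps.put(jobId, nowMillis);
    }

    public boolean hasCleanEntry(String jobId) {
        return cleanEntryTimestamps.containsKey(jobId);
    }

    // Drop entries older than the TTL; in a real store this would run on a
    // scheduled executor so the memory footprint stays bounded.
    public void sweep(long nowMillis) {
        cleanEntryTimestamps.entrySet().removeIf(e -> nowMillis - e.getValue() > ttlMillis);
    }
}
```

Entries survive sweeps within the TTL window and disappear afterwards, which bounds the store's memory use without affecting recently finished jobs.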





[jira] [Assigned] (FLINK-32806) EmbeddedJobResultStore keeps the non-dirty job entries forever

2023-08-18 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-32806:
-

Assignee: hk__lrzy

> EmbeddedJobResultStore keeps the non-dirty job entries forever
> --
>
> Key: FLINK-32806
> URL: https://issues.apache.org/jira/browse/FLINK-32806
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Assignee: hk__lrzy
>Priority: Major
>  Labels: starter
>
> The {{EmbeddedJobResultStore}} keeps the entries of cleaned-up jobs in-memory 
> forever. We might want to add a TTL to have those entries be removed after a 
> certain amount of time to allow maintaining the memory footprint of the 
> {{EmbeddedJobResultStore}}.





[GitHub] [flink] XComp commented on a diff in pull request #21289: [FLINK-29452] Allow unit tests to be executed independently

2023-08-18 Thread via GitHub


XComp commented on code in PR #21289:
URL: https://github.com/apache/flink/pull/21289#discussion_r1298465139


##
flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/testutils/junit/RetryOnExceptionExtensionTest.java:
##
@@ -37,53 +41,77 @@
 @ExtendWith(RetryExtension.class)
 class RetryOnExceptionExtensionTest {
 
-private static final int NUMBER_OF_RUNS = 3;
+private static final int NUMBER_OF_RETRIES = 3;
 
-private static int runsForSuccessfulTest = 0;
+private static final HashMap<String, Integer> methodRunCount = new HashMap<>();
 
-private static int runsForTestWithMatchingException = 0;
+private static final HashMap<String, Runnable> verificationCallbackRegistry = new HashMap<>();
 
-private static int runsForTestWithSubclassException = 0;
+@BeforeEach
+void incrementMethodRunCount(TestInfo testInfo) {
+// Set or increment the run count for the unit test method, by the method short name.
+// This starts at 1 and is incremented before the test starts.
+testInfo.getTestMethod()
+.ifPresent(
+method ->
+methodRunCount.compute(
+method.getName(), (k, v) -> (v == null) ? 1 : v + 1));
+}
+
+private static int assertAndReturnRunCount(TestInfo testInfo) {
+return methodRunCount.get(assertAndReturnTestMethodName(testInfo));
+}
 
-private static int runsForPassAfterOneFailure = 0;
+private static void registerCallbackForTest(TestInfo testInfo, Consumer<Integer> verification) {
+verificationCallbackRegistry.putIfAbsent(
+assertAndReturnTestMethodName(testInfo),
+() -> verification.accept(assertAndReturnRunCount(testInfo)));
+}
+
+private static String assertAndReturnTestMethodName(TestInfo testInfo) {
+return testInfo.getTestMethod()
+.orElseThrow(() -> new AssertionError("No test method is provided."))
+.getName();
+}
 
 @AfterAll
 static void verify() {
-assertThat(runsForTestWithMatchingException).isEqualTo(NUMBER_OF_RUNS + 1);
-assertThat(runsForTestWithSubclassException).isEqualTo(NUMBER_OF_RUNS + 1);
-assertThat(runsForSuccessfulTest).isOne();
-assertThat(runsForPassAfterOneFailure).isEqualTo(2);
+for (Runnable verificationCallback : verificationCallbackRegistry.values()) {
+verificationCallback.run();
+}
 }
 
 @TestTemplate
-@RetryOnException(times = NUMBER_OF_RUNS, exception = IllegalArgumentException.class)
-void testSuccessfulTest() {
-runsForSuccessfulTest++;
+@RetryOnException(times = NUMBER_OF_RETRIES, exception = IllegalArgumentException.class)
+void testSuccessfulTest(TestInfo testInfo) {
+registerCallbackForTest(testInfo, total -> assertThat(total).isOne());
 }
 
 @TestTemplate
-@RetryOnException(times = NUMBER_OF_RUNS, exception = IllegalArgumentException.class)
-void testMatchingException() {
-runsForTestWithMatchingException++;
-if (runsForTestWithMatchingException <= NUMBER_OF_RUNS) {
+@RetryOnException(times = NUMBER_OF_RETRIES, exception = IllegalArgumentException.class)
+void testMatchingException(TestInfo testInfo) {
+registerCallbackForTest(
+testInfo, total -> assertThat(total).isEqualTo(NUMBER_OF_RETRIES + 1));
+if (methodRunCount.get("testMatchingException") <= NUMBER_OF_RETRIES) {
 throw new IllegalArgumentException();
 }
 }
 
 @TestTemplate
-@RetryOnException(times = NUMBER_OF_RUNS, exception = RuntimeException.class)
-void testSubclassException() {
-runsForTestWithSubclassException++;
-if (runsForTestWithSubclassException <= NUMBER_OF_RUNS) {
+@RetryOnException(times = NUMBER_OF_RETRIES, exception = RuntimeException.class)
+void testSubclassException(TestInfo testInfo) {
+registerCallbackForTest(
+testInfo, total -> assertThat(total).isEqualTo(NUMBER_OF_RETRIES + 1));
+if (methodRunCount.get("testSubclassException") <= NUMBER_OF_RETRIES) {

Review Comment:
   ```suggestion
   if (assertAndReturnRunCount(testInfo) <= NUMBER_OF_RETRIES) {
   ```



##
flink-test-utils-parent/flink-test-utils-junit/src/test/java/org/apache/flink/testutils/junit/RetryOnExceptionExtensionTest.java:
##
@@ -37,53 +41,77 @@
 @ExtendWith(RetryExtension.class)
 class RetryOnExceptionExtensionTest {
 
-private static final int NUMBER_OF_RUNS = 3;
+private static final int NUMBER_OF_RETRIES = 3;
 
-private static int runsForSuccessfulTest = 0;
+private static final HashMap<String, Integer> methodRunCount = new HashMap<>();
 
-private static int runsForTestWithMatchingException = 0;
+private static final HashMap<String, Runnable> verificationCallbackRegistry = new HashMap<>();
 
-private static int 
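The per-method run counting in the diff above hinges on `Map.compute` setting the count to 1 on first sight and incrementing it afterwards. A JDK-only sketch of that pattern (`RunCounter` is hypothetical, decoupled from JUnit's `TestInfo`):

```java
import java.util.HashMap;
import java.util.Map;

public class RunCounter {

    private final Map<String, Integer> methodRunCount = new HashMap<>();

    // Set or increment the run count for a test method name; the count
    // starts at 1, mirroring the @BeforeEach hook in the diff above.
    public int increment(String methodName) {
        return methodRunCount.compute(methodName, (k, v) -> (v == null) ? 1 : v + 1);
    }

    public int get(String methodName) {
        return methodRunCount.getOrDefault(methodName, 0);
    }
}
```

Keying by method name lets a single static map track every test method independently, so the `@AfterAll` hook can verify each method's retry count in one pass.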

[GitHub] [flink] XComp commented on a diff in pull request #22341: [FLINK-27204][flink-runtime] Refract FileSystemJobResultStore to execute I/O operations on the ioExecutor

2023-08-18 Thread via GitHub


XComp commented on code in PR #22341:
URL: https://github.com/apache/flink/pull/22341#discussion_r1298453225


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMasterServiceLeadershipRunner.java:
##
@@ -392,7 +392,7 @@ private void forwardResultFuture(
 if (isValidLeader(leaderSessionId)) {
 onJobCompletion(jobManagerRunnerResult, throwable);
 } else {
-LOG.trace(
+LOG.debug(

Review Comment:
   You missed `forwardIfValidLeader`.






[GitHub] [flink] XComp commented on a diff in pull request #22341: [FLINK-27204][flink-runtime] Refract FileSystemJobResultStore to execute I/O operations on the ioExecutor

2023-08-18 Thread via GitHub


XComp commented on code in PR #22341:
URL: https://github.com/apache/flink/pull/22341#discussion_r1298451802


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMasterServiceLeadershipRunner.java:
##
@@ -392,7 +392,7 @@ private void forwardResultFuture(
 if (isValidLeader(leaderSessionId)) {
 onJobCompletion(jobManagerRunnerResult, throwable);
 } else {
-LOG.trace(
+LOG.debug(

Review Comment:
   You missed `forwardIfValidLeader`







[GitHub] [flink] 1996fanrui commented on a diff in pull request #23199: [FLINK-32858][tests][JUnit5 migration] Migrate flink-runtime/utils tests to JUnit5

2023-08-18 Thread via GitHub


1996fanrui commented on code in PR #23199:
URL: https://github.com/apache/flink/pull/23199#discussion_r1298421164


##
flink-runtime/src/test/java/org/apache/flink/runtime/util/BoundedFIFOQueueTest.java:
##
@@ -18,90 +18,83 @@
 
 package org.apache.flink.runtime.util;
 
-import org.apache.flink.util.TestLogger;
+import org.junit.jupiter.api.Test;
 
-import org.junit.Test;
-
-import static org.hamcrest.collection.IsIterableContainingInOrder.contains;
-import static org.hamcrest.collection.IsIterableWithSize.iterableWithSize;
-import static org.hamcrest.core.Is.is;
-import static org.junit.Assert.assertThat;
-import static org.junit.Assert.fail;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 
 /** {@code BoundedFIFOQueueTest} tests {@link BoundedFIFOQueue}. */
-public class BoundedFIFOQueueTest extends TestLogger {
+class BoundedFIFOQueueTest {
 
-@Test(expected = IllegalArgumentException.class)
-public void testConstructorFailing() {
-new BoundedFIFOQueue<>(-1);
+@Test
+void testConstructorFailing() {
+assertThatThrownBy(() -> new BoundedFIFOQueue<>(-1))
+.isInstanceOf(IllegalArgumentException.class);
 }
 
 @Test
-public void testQueueWithMaxSize0() {
+void testQueueWithMaxSize0() {
 final BoundedFIFOQueue<Integer> testInstance = new BoundedFIFOQueue<>(0);
-assertThat(testInstance, iterableWithSize(0));
+assertThat(testInstance).isEmpty();
 testInstance.add(1);
-assertThat(testInstance, iterableWithSize(0));
+assertThat(testInstance).isEmpty();
 }
 
 @Test
-public void testQueueWithMaxSize2() {
+void testQueueWithMaxSize2() {
 final BoundedFIFOQueue<Integer> testInstance = new BoundedFIFOQueue<>(2);
-assertThat(testInstance, iterableWithSize(0));
+assertThat(testInstance.size()).isZero();

Review Comment:
   ```suggestion
   assertThat(testInstance).isEmpty();
   ```
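The class under review is a FIFO that evicts its oldest element once a bound is exceeded. A minimal JDK-only sketch of that behavior — `BoundedFifo` here is an illustration, not Flink's `BoundedFIFOQueue` implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

public class BoundedFifo<T> implements Iterable<T> {

    private final Deque<T> elements = new ArrayDeque<>();
    private final int maxSize;

    public BoundedFifo(int maxSize) {
        // Mirrors testConstructorFailing: a negative bound is rejected.
        if (maxSize < 0) {
            throw new IllegalArgumentException("maxSize must be >= 0, was " + maxSize);
        }
        this.maxSize = maxSize;
    }

    // Append, evicting the oldest element once the bound is exceeded;
    // with maxSize 0 the queue stays empty, as testQueueWithMaxSize0 pins down.
    public void add(T element) {
        elements.addLast(element);
        while (elements.size() > maxSize) {
            elements.removeFirst();
        }
    }

    public int size() {
        return elements.size();
    }

    @Override
    public Iterator<T> iterator() {
        return elements.iterator();
    }
}
```

Adding 1, 2, 3 to a bound-2 instance leaves `[2, 3]`: the oldest element is dropped, the insertion order of the rest is preserved.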



##
flink-runtime/src/test/java/org/apache/flink/runtime/util/SerializedThrowableTest.java:
##
@@ -86,49 +84,47 @@ public void testSerialization() {
 String.format(
 "%s: %s",
 userException2.getClass().getName(), userException2.getMessage());
-assertEquals(serialized2.getMessage(), result);
+assertThat(serialized2).hasMessage(result);
 
 // copy the serialized throwable and make sure everything still works
 SerializedThrowable copy = CommonTestUtils.createCopySerializable(serialized);
-assertEquals(
-ExceptionUtils.stringifyException(userException),
-ExceptionUtils.stringifyException(copy));
-assertArrayEquals(userException.getStackTrace(), copy.getStackTrace());
+assertThat(ExceptionUtils.stringifyException(userException))
+.isEqualTo(ExceptionUtils.stringifyException(copy));
+assertThat(userException.getStackTrace()).isEqualTo(copy.getStackTrace());
 
 // deserialize the proper exception
 Throwable deserialized = copy.deserializeError(loader);
-assertEquals(clazz, deserialized.getClass());
+assertThat(deserialized.getClass()).isEqualTo(clazz);
 
 // deserialization with the wrong classloader does not lead to a failure
 Throwable wronglyDeserialized = copy.deserializeError(getClass().getClassLoader());
-assertEquals(
-ExceptionUtils.stringifyException(userException),
-ExceptionUtils.stringifyException(wronglyDeserialized));
+assertThat(ExceptionUtils.stringifyException(userException))
+.isEqualTo(ExceptionUtils.stringifyException(wronglyDeserialized));
 } catch (Exception e) {
 e.printStackTrace();
 fail(e.getMessage());
 }
 }
 
 @Test
-public void testCauseChaining() {
+void testCauseChaining() {
 Exception cause2 = new Exception("level2");
 Exception cause1 = new Exception("level1", cause2);
 Exception root = new Exception("level0", cause1);
 
 SerializedThrowable st = new SerializedThrowable(root);
 
-assertEquals("java.lang.Exception: level0", st.getMessage());
+assertThat(st).hasMessage("java.lang.Exception: level0");
 
-assertNotNull(st.getCause());
-assertEquals("java.lang.Exception: level1", st.getCause().getMessage());
+assertThat(st.getCause()).isNotNull();
+assertThat(st.getCause()).hasMessage("java.lang.Exception: level1");

Review Comment:
   ```suggestion
   assertThat(st.getCause()).isNotNull().hasMessage("java.lang.Exception: level1");
   ```
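The chained-cause assertions above can be reproduced with plain exceptions. A JDK-only sketch of the level0 -> level1 -> level2 chain (no `SerializedThrowable` involved, so the messages lack the `java.lang.Exception:` prefix that class prepends):

```java
public class CauseChain {

    // Build a three-level cause chain like the test's level0 -> level1 -> level2.
    public static Exception buildChain() {
        Exception cause2 = new Exception("level2");
        Exception cause1 = new Exception("level1", cause2);
        return new Exception("level0", cause1);
    }

    public static void main(String[] args) {
        Exception root = buildChain();
        System.out.println(root.getMessage());                       // level0
        System.out.println(root.getCause().getMessage());            // level1
        System.out.println(root.getCause().getCause().getMessage()); // level2
    }
}
```

Walking `getCause()` twice reaches the innermost exception, and the chain ends with a null cause, which is what the assertions in the diff traverse.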



##
flink-runtime/src/test/java/org/apache/flink/runtime/util/RunnablesTest.java:

[jira] [Updated] (FLINK-32890) Flink app rolled back with old savepoints (3 hours back in time) while some checkpoints have been taken in between

2023-08-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32890:
---
Labels: pull-request-available  (was: )

> Flink app rolled back with old savepoints (3 hours back in time) while some 
> checkpoints have been taken in between
> --
>
> Key: FLINK-32890
> URL: https://issues.apache.org/jira/browse/FLINK-32890
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Nicolas Fraison
>Priority: Major
>  Labels: pull-request-available
>
> Here are all details about the issue:
>  * Deployed new release of a flink app with a new operator
>  * Flink Operator set the app as stable
>  * After some time the app failed and stay in failed state (due to some issue 
> with our kafka clusters)
>  * Finally decided to rollback the new release just in case it could be the 
> root cause of the issue on kafka
>  * Operator detect: {{Job is not running but HA metadata is available for 
> last state restore, ready for upgrade, Deleting JobManager deployment while 
> preserving HA metadata.}}  -> rely on last-state (as we do not disable 
> fallback), no savepoint taken
>  * Flink start JM and deployment of the app. It well find the 3 checkpoints
>  * {{Using '/flink-kafka-job-apache-nico/flink-kafka-job-apache-nico' as 
> Zookeeper namespace.}}
>  * {{Initializing job 'flink-kafka-job' (6b24a364c1905e924a69f3dbff0d26a6).}}
>  * {{Recovering checkpoints from 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints'}.}}
>  * {{Found 3 checkpoints in 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints'}.}}
>  * {{Restoring job 6b24a364c1905e924a69f3dbff0d26a6 from Checkpoint 19 @ 
> 1692268003920 for 6b24a364c1905e924a69f3dbff0d26a6 located at 
> s3p://.../flink-kafka-job-apache-nico/checkpoints/6b24a364c1905e924a69f3dbff0d26a6/chk-19}}
>  * Job failed because of the missing operator
> {code:java}
> Job 6b24a364c1905e924a69f3dbff0d26a6 reached terminal state FAILED.
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
> Caused by: java.util.concurrent.CompletionException: 
> java.lang.IllegalStateException: There is no operator for the state 
> f298e8715b4d85e6f965b60e1c848cbe{code}
>  * Job 6b24a364c1905e924a69f3dbff0d26a6 has been registered for cleanup in 
> the JobResultStore after reaching a terminal state.
>  * {{Clean up the high availability data for job 
> 6b24a364c1905e924a69f3dbff0d26a6.}}
>  * {{Removed job graph 6b24a364c1905e924a69f3dbff0d26a6 from 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobgraphs'}.}}
>  * The JobManager restarts and tries to resubmit the job, but the job had 
> already reached a terminal state, so the submission is ignored
>  * {{Received JobGraph submission 'flink-kafka-job' 
> (6b24a364c1905e924a69f3dbff0d26a6).}}
>  * {{Ignoring JobGraph submission 'flink-kafka-job' 
> (6b24a364c1905e924a69f3dbff0d26a6) because the job already reached a 
> globally-terminal state (i.e. FAILED, CANCELED, FINISHED) in a previous 
> execution.}}
>  * {{Application completed SUCCESSFULLY}}
>  * Finally the operator rolls back the deployment and still indicates that {{Job 
> is not running but HA metadata is available for last state restore, ready for 
> upgrade}}
>  * But the job metadata is no longer there (it was cleaned up previously)
>  
> {code:java}
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints
> Path 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints
>  doesn't exist
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico
> jobgraphs
> jobs
> leader
> {code}
>  
> The rolled-back app finally restores from the last provided savepoint, as no 
> metadata/checkpoints are available. But this savepoint is an old one, because 
> during the upgrade the operator decided to rely on last-state (the old 
> savepoint is a scheduled one)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] ashangit opened a new pull request, #654: [FLINK-32890] Correct HA patch check for zookeeper metadata store

2023-08-18 Thread via GitHub


ashangit opened a new pull request, #654:
URL: https://github.com/apache/flink-kubernetes-operator/pull/654

   
   
   ## What is the purpose of the change
   
   * This pull request corrects the path used to check whether HA metadata is 
available in ZooKeeper. The previous path check led the operator to take action 
based on the availability of that metadata even though the job-level metadata 
was not available; see https://issues.apache.org/jira/browse/FLINK-32890 for a 
case where this led to a bad action
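The corrected check can be sketched abstractly (hypothetical znode layout and helper name, not the operator's actual code, which talks to ZooKeeper): HA metadata should only count as usable for last-state recovery when the job-level checkpoint node exists, not merely the root HA namespace.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HaMetadataCheckSketch {
    // Hypothetical helper: metadata is only usable for last-state recovery
    // if the job-specific checkpoints znode exists under the HA namespace.
    static boolean haMetadataAvailable(Set<String> znodes, String ns, String jobId) {
        return znodes.contains(ns + "/jobs/" + jobId + "/checkpoints");
    }

    public static void main(String[] args) {
        // Znodes as left behind after the failed job's HA data was cleaned up:
        // the namespace and its top-level children survive, the job data does not.
        Set<String> znodes = new HashSet<>(Arrays.asList(
                "/flink-app/flink-app",
                "/flink-app/flink-app/jobgraphs",
                "/flink-app/flink-app/jobs",
                "/flink-app/flink-app/leader"));
        System.out.println(haMetadataAvailable(
                znodes, "/flink-app/flink-app", "6b24a364c1905e924a69f3dbff0d26a6"));
        // prints: false -- a check against the root namespace alone would say true
    }
}
```

A check against the root namespace alone would report metadata as available in this scenario, which is the misbehavior described in the Jira ticket.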
   
   
   ## Brief change log
   
 - Correct HA path check for zookeeper metadata store
   
   ## Verifying this change
   
   
   This change is already covered by existing tests, such as 
`FlinkUtilsZookeeperHATest.zookeeperHaMetaDataCheckTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., are there any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] X-czh commented on pull request #23199: [FLINK-32858][tests][JUnit5 migration] Migrate flink-runtime/utils tests to JUnit5

2023-08-18 Thread via GitHub


X-czh commented on PR #23199:
URL: https://github.com/apache/flink/pull/23199#issuecomment-1683862885

   Thanks @Jiabao-Sun, I've addressed the missing comments and applied an 
improvement to assert exception messages with the 
`assertThat(ex).hasMessage(msg)` pattern.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-32888) File upload runs into EndOfDataDecoderException

2023-08-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755943#comment-17755943
 ] 

Chesnay Schepler edited comment on FLINK-32888 at 8/18/23 12:27 PM:


master:
329ee55e2e3ef5a6cf21009f6e321a99b1c91452
9546f8243a24e7b45582b6de6702f819f1d73f97
1.17:
4a368707162760fc39208d4a4d4bac2c6c728802
3e1e32bd89a2ed871e79cd16634c6f66d5ff3db8
1.16:
333a6e67b4bbe78b8c5695f8cd52ea2ea7dc0b20
2d3c142eb036f2cf65b0a4f81caddd7e4c943fd5


was (Author: zentol):
master:
329ee55e2e3ef5a6cf21009f6e321a99b1c91452
9546f8243a24e7b45582b6de6702f819f1d73f97

> File upload runs into EndOfDataDecoderException
> ---
>
> Key: FLINK-32888
> URL: https://issues.apache.org/jira/browse/FLINK-32888
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> With the right request the FileUploadHandler runs into an 
> EndOfDataDecoderException although everything is fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32888) File upload runs into EndOfDataDecoderException

2023-08-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-32888.

Resolution: Fixed

> File upload runs into EndOfDataDecoderException
> ---
>
> Key: FLINK-32888
> URL: https://issues.apache.org/jira/browse/FLINK-32888
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> With the right request the FileUploadHandler runs into an 
> EndOfDataDecoderException although everything is fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] XComp commented on pull request #23238: [BP-1.16][FLINK-32751][runtime] Refactors close handling to CollectSinkOperatorCoordinator

2023-08-18 Thread via GitHub


XComp commented on PR #23238:
URL: https://github.com/apache/flink/pull/23238#issuecomment-1683836093

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on pull request #23237: [BP-1.17][FLINK-32751][runtime] Refactors close handling to CollectSinkOperatorCoordinator

2023-08-18 Thread via GitHub


XComp commented on PR #23237:
URL: https://github.com/apache/flink/pull/23237#issuecomment-1683835984

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-26586) FileSystem uses unbuffered read I/O

2023-08-18 Thread Matthias Schwalbe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755946#comment-17755946
 ] 

Matthias Schwalbe commented on FLINK-26586:
---

Hi [~masteryhx] 

As much as I would like to see this ticket solved it is a little bit in trouble 
and probably needs discussion on how to integrate it:
 * There was another ticket FLINK-19911 with a corresponding PR that basically 
solves the same problem, but
 * This ticket/PR was closed for licensing reasons
 * see here [https://github.com/apache/flink/pull/13885#issuecomment-737999505]

In my implementation I didn't refer to the original Java code; however, the 
question remains whether that is 'distant' enough from the original Java 
implementation.

I've observed speed improvements of about a factor of 30 back when I had to 
process a savepoint of about 60GB.
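The buffering approach described can be illustrated with plain JDK classes (hypothetical names and sizes, not the attached BufferedFSDataInputStreamWrapper): `DataInputStream.readInt()` issues 4 single-byte `read()` calls against the underlying stream (the `readInt:390` frame in the stack below), so on an unbuffered file stream each primitive read costs several kernel calls; an intermediate `BufferedInputStream` serves them from memory instead.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class BufferedReadSketch {
    // Reads the first int through a buffered wrapper: the 4 single-byte reads
    // issued by readInt() are served from an in-memory 8 KiB buffer that was
    // filled by one bulk read, instead of 4 separate reads on the raw stream.
    static int readFirstInt(byte[] payload) throws IOException {
        ByteArrayInputStream raw = new ByteArrayInputStream(payload); // stands in for an unbuffered file stream
        DataInputStream in = new DataInputStream(new BufferedInputStream(raw, 8192));
        return in.readInt();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[1024]; // stands in for savepoint bytes on disk
        System.out.println(readFirstInt(payload)); // prints 0 (payload is all zeroes)
    }
}
```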

What should we do with this ticket?

Thias

 

> FileSystem uses unbuffered read I/O
> ---
>
> Key: FLINK-26586
> URL: https://issues.apache.org/jira/browse/FLINK-26586
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor, Connectors / FileSystem, Runtime 
> / Checkpointing
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Matthias Schwalbe
>Priority: Major
> Attachments: BufferedFSDataInputStreamWrapper.java, 
> BufferedLocalFileSystem.java
>
>
> - I found out that, at least when using LocalFileSystem on a windows system, 
> read I/O to load a savepoint is unbuffered,
>  - See example stack [1]
>  - i.e. in order to load only a long in a serializer, it needs to go into 
> kernel mode 8 times and load the 8 bytes one by one
>  - I coded a BufferedFSDataInputStreamWrapper that allows to opt-in buffered 
> reads on any FileSystem implementation
>  - In our setting savepoint load is now 30 times faster
>  - I once saw a Jira ticket about improving savepoint load time in general 
> (lost the link, unfortunately); maybe this approach can help with it
>  - not sure if HDFS has got the same problem
>  - I can contribute my implementation of a BufferedFSDataInputStreamWrapper 
> which can be integrated in any 
> [1] unbuffered reads stack:
> read:207, FileInputStream (java.io)
> read:68, LocalDataInputStream (org.apache.flink.core.fs.local)
> read:50, FSDataInputStreamWrapper (org.apache.flink.core.fs)
> read:42, ForwardingInputStream (org.apache.flink.runtime.util)
> readInt:390, DataInputStream (java.io)
> deserialize:80, BytePrimitiveArraySerializer 
> (org.apache.flink.api.common.typeutils.base.array)
> next:298, FullSnapshotRestoreOperation$KeyGroupEntriesIterator 
> (org.apache.flink.runtime.state.restore)
> next:273, FullSnapshotRestoreOperation$KeyGroupEntriesIterator 
> (org.apache.flink.runtime.state.restore)
> restoreKVStateData:147, RocksDBFullRestoreOperation 
> (org.apache.flink.contrib.streaming.state.restore)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32880) Redundant taskManagers should always be fulfilled in FineGrainedSlotManager

2023-08-18 Thread xiangyu feng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangyu feng closed FLINK-32880.


> Redundant taskManagers should always be fulfilled in FineGrainedSlotManager
> ---
>
> Key: FLINK-32880
> URL: https://issues.apache.org/jira/browse/FLINK-32880
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: xiangyu feng
>Assignee: xiangyu feng
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Currently, if you are using FineGrainedSlotManager, when a redundant 
> taskmanager exits abnormally, no replacement taskmanager will be started 
> during the periodic check in FineGrainedSlotManager.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32888) File upload runs into EndOfDataDecoderException

2023-08-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755943#comment-17755943
 ] 

Chesnay Schepler commented on FLINK-32888:
--

master: 9546f8243a24e7b45582b6de6702f819f1d73f97

> File upload runs into EndOfDataDecoderException
> ---
>
> Key: FLINK-32888
> URL: https://issues.apache.org/jira/browse/FLINK-32888
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> With the right request the FileUploadHandler runs into an 
> EndOfDataDecoderException although everything is fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32888) File upload runs into EndOfDataDecoderException

2023-08-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755943#comment-17755943
 ] 

Chesnay Schepler edited comment on FLINK-32888 at 8/18/23 11:57 AM:


master:
329ee55e2e3ef5a6cf21009f6e321a99b1c91452
9546f8243a24e7b45582b6de6702f819f1d73f97


was (Author: zentol):
master: 9546f8243a24e7b45582b6de6702f819f1d73f97

> File upload runs into EndOfDataDecoderException
> ---
>
> Key: FLINK-32888
> URL: https://issues.apache.org/jira/browse/FLINK-32888
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> With the right request the FileUploadHandler runs into an 
> EndOfDataDecoderException although everything is fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32888) File upload runs into EndOfDataDecoderException

2023-08-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-32888:
-
Fix Version/s: 1.16.3

> File upload runs into EndOfDataDecoderException
> ---
>
> Key: FLINK-32888
> URL: https://issues.apache.org/jira/browse/FLINK-32888
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> With the right request the FileUploadHandler runs into an 
> EndOfDataDecoderException although everything is fine.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] zentol merged pull request #23229: [FLINK-32888] Handle hasNext() throwing EndOfDataDecoderException

2023-08-18 Thread via GitHub


zentol merged PR #23229:
URL: https://github.com/apache/flink/pull/23229


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #23239: [FLINK-26585] replace implementation of MultiStateKeyIterator with Stream-free implementation

2023-08-18 Thread via GitHub


flinkbot commented on PR #23239:
URL: https://github.com/apache/flink/pull/23239#issuecomment-1683800450

   
   ## CI report:
   
   * 1403e3368cb19f089e3b67511c3db2d008d87b80 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-26585) State Processor API: Loading a state set buffers the whole state set in memory before starting to process

2023-08-18 Thread Matthias Schwalbe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755934#comment-17755934
 ] 

Matthias Schwalbe commented on FLINK-26585:
---

[~masteryhx] :

I opened this PR #23239 for review. 

I didn't run the impact tests on Azure for lack of time.

Many thanks for your support

Thias

PS: I'll be off work next week and can answer requests only afterwards.

> State Processor API: Loading a state set buffers the whole state set in 
> memory before starting to process
> -
>
> Key: FLINK-26585
> URL: https://issues.apache.org/jira/browse/FLINK-26585
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor
>Affects Versions: 1.13.0, 1.14.0, 1.15.0
>Reporter: Matthias Schwalbe
>Assignee: Matthias Schwalbe
>Priority: Major
>  Labels: pull-request-available
> Attachments: MultiStateKeyIteratorNoStreams.java
>
>
> * When loading a state, MultiStateKeyIterator loads and buffers the whole 
> state in memory before it even processes a single data point 
>  ** This is absolutely no problem for small state (hence the unit tests work 
> fine)
>  ** MultiStateKeyIterator ctor sets up a java Stream that iterates all state 
> descriptors and flattens all datapoints contained within
>  ** The java.util.stream.Stream#flatMap function causes the buffering of the 
> whole data set when enumerated later on
>  ** See call stack [1] 
>  *** In our case this is 150e6 data points (> 1GiB just for the pointers to 
> the data, let alone the data itself ~30GiB)
>  ** I’m not aware of any way to instrument Stream to avoid the 
> problem, hence
>  ** I coded an alternative implementation of MultiStateKeyIterator that 
> avoids using Java Streams
>  ** I can contribute our implementation (MultiStateKeyIteratorNoStreams)
> [1]
> Streams call stack:
> hasNext:77, RocksStateKeysIterator 
> (org.apache.flink.contrib.streaming.state.iterator)
> next:82, RocksStateKeysIterator 
> (org.apache.flink.contrib.streaming.state.iterator)
> forEachRemaining:116, Iterator (java.util)
> forEachRemaining:1801, Spliterators$IteratorSpliterator (java.util)
> forEach:580, ReferencePipeline$Head (java.util.stream)
> accept:270, ReferencePipeline$7$1 (java.util.stream)  
>  #  <R> Stream<R> flatMap(final Function<? super P_OUT, ? extends Stream<? extends R>> var1)
> accept:373, ReferencePipeline$11$1 (java.util.stream) 
>  # Stream<P_OUT> peek(final Consumer<? super P_OUT> var1)
> accept:193, ReferencePipeline$3$1 (java.util.stream)      
>  #  <R> Stream<R> map(final Function<? super P_OUT, ? extends R> var1)
> tryAdvance:1359, ArrayList$ArrayListSpliterator (java.util)
> lambda$initPartialTraversalState$0:294, 
> StreamSpliterators$WrappingSpliterator (java.util.stream)
> getAsBoolean:-1, 1528195520 
> (java.util.stream.StreamSpliterators$WrappingSpliterator$$Lambda$57)
> fillBuffer:206, StreamSpliterators$AbstractWrappingSpliterator 
> (java.util.stream)
> doAdvance:161, StreamSpliterators$AbstractWrappingSpliterator 
> (java.util.stream)
> tryAdvance:300, StreamSpliterators$WrappingSpliterator (java.util.stream)
> hasNext:681, Spliterators$1Adapter (java.util)
> hasNext:83, MultiStateKeyIterator (org.apache.flink.state.api.input)
> hasNext:162, KeyedStateReaderOperator$NamespaceDecorator 
> (org.apache.flink.state.api.input.operator)
> reachedEnd:215, KeyedStateInputFormat (org.apache.flink.state.api.input)
> invoke:191, DataSourceTask (org.apache.flink.runtime.operators)
> doRun:776, Task (org.apache.flink.runtime.taskmanager)
> run:563, Task (org.apache.flink.runtime.taskmanager)
> run:748, Thread (java.lang)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26585) State Processor API: Loading a state set buffers the whole state set in memory before starting to process

2023-08-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26585:
---
Labels: pull-request-available  (was: )

> State Processor API: Loading a state set buffers the whole state set in 
> memory before starting to process
> -
>
> Key: FLINK-26585
> URL: https://issues.apache.org/jira/browse/FLINK-26585
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor
>Affects Versions: 1.13.0, 1.14.0, 1.15.0
>Reporter: Matthias Schwalbe
>Assignee: Matthias Schwalbe
>Priority: Major
>  Labels: pull-request-available
> Attachments: MultiStateKeyIteratorNoStreams.java
>
>
> * When loading a state, MultiStateKeyIterator loads and buffers the whole 
> state in memory before it even processes a single data point 
>  ** This is absolutely no problem for small state (hence the unit tests work 
> fine)
>  ** MultiStateKeyIterator ctor sets up a java Stream that iterates all state 
> descriptors and flattens all datapoints contained within
>  ** The java.util.stream.Stream#flatMap function causes the buffering of the 
> whole data set when enumerated later on
>  ** See call stack [1] 
>  *** In our case this is 150e6 data points (> 1GiB just for the pointers to 
> the data, let alone the data itself ~30GiB)
>  ** I’m not aware of any way to instrument Stream to avoid the 
> problem, hence
>  ** I coded an alternative implementation of MultiStateKeyIterator that 
> avoids using Java Streams
>  ** I can contribute our implementation (MultiStateKeyIteratorNoStreams)
> [1]
> Streams call stack:
> hasNext:77, RocksStateKeysIterator 
> (org.apache.flink.contrib.streaming.state.iterator)
> next:82, RocksStateKeysIterator 
> (org.apache.flink.contrib.streaming.state.iterator)
> forEachRemaining:116, Iterator (java.util)
> forEachRemaining:1801, Spliterators$IteratorSpliterator (java.util)
> forEach:580, ReferencePipeline$Head (java.util.stream)
> accept:270, ReferencePipeline$7$1 (java.util.stream)  
>  #  <R> Stream<R> flatMap(final Function<? super P_OUT, ? extends Stream<? extends R>> var1)
> accept:373, ReferencePipeline$11$1 (java.util.stream) 
>  # Stream<P_OUT> peek(final Consumer<? super P_OUT> var1)
> accept:193, ReferencePipeline$3$1 (java.util.stream)      
>  #  <R> Stream<R> map(final Function<? super P_OUT, ? extends R> var1)
> tryAdvance:1359, ArrayList$ArrayListSpliterator (java.util)
> lambda$initPartialTraversalState$0:294, 
> StreamSpliterators$WrappingSpliterator (java.util.stream)
> getAsBoolean:-1, 1528195520 
> (java.util.stream.StreamSpliterators$WrappingSpliterator$$Lambda$57)
> fillBuffer:206, StreamSpliterators$AbstractWrappingSpliterator 
> (java.util.stream)
> doAdvance:161, StreamSpliterators$AbstractWrappingSpliterator 
> (java.util.stream)
> tryAdvance:300, StreamSpliterators$WrappingSpliterator (java.util.stream)
> hasNext:681, Spliterators$1Adapter (java.util)
> hasNext:83, MultiStateKeyIterator (org.apache.flink.state.api.input)
> hasNext:162, KeyedStateReaderOperator$NamespaceDecorator 
> (org.apache.flink.state.api.input.operator)
> reachedEnd:215, KeyedStateInputFormat (org.apache.flink.state.api.input)
> invoke:191, DataSourceTask (org.apache.flink.runtime.operators)
> doRun:776, Task (org.apache.flink.runtime.taskmanager)
> run:563, Task (org.apache.flink.runtime.taskmanager)
> run:748, Thread (java.lang)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] RainerMatthiasS opened a new pull request, #23239: [FLINK-26585] replace implementation of MultiStateKeyIterator with Stream-free implementation

2023-08-18 Thread via GitHub


RainerMatthiasS opened a new pull request, #23239:
URL: https://github.com/apache/flink/pull/23239

   ## Contribution Checklist
   
 - `Pending:` Make sure that the change passes the automated tests, i.e., 
`mvn clean verify` passes. You can set up Azure Pipelines CI to do that 
following [this 
guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
   ## What is the purpose of the change
   
   Avoids a known flaw in Stream.flatMap (see 
https://bugs.openjdk.org/browse/JDK-8267359) and thereby a potential OOM error 
for keyed state with many keys when reading it with the State Processor API.
   
   ## Brief change log
   
 - replaced the original implementation of MultiStateKeyIterator with a 
Java-Stream-free implementation
 - this fixes a known flaw in Stream.flatMap which leads to complete 
enumeration of the inner iterator and buffering of all elements, with the 
potential for an OOM error for state with many keys
 - a manual implementation of the nested iterator avoids this problem
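The flatMap flaw referenced above (JDK-8267359) is easy to demonstrate with plain JDK streams (illustrative names only): pulling a single element through the pipeline's iterator drains the entire first inner stream first.

```java
import java.util.Iterator;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class FlatMapBufferingDemo {
    // Counts how many inner elements flatMap materializes before the first
    // element is handed to the caller.
    static int pulledAfterFirstNext() {
        AtomicInteger pulled = new AtomicInteger();
        // Outer stream of "state descriptors", each flat-mapped to an inner
        // stream of "data points"; peek() counts materialized inner elements.
        Iterator<Integer> it = Stream.of("descriptor-1", "descriptor-2")
                .flatMap(d -> Stream.of(1, 2, 3).peek(i -> pulled.incrementAndGet()))
                .iterator();
        it.next(); // ask for ONE element...
        // ...but flatMap has already drained the whole first inner stream.
        return pulled.get();
    }

    public static void main(String[] args) {
        System.out.println("inner elements pulled: " + pulledAfterFirstNext()); // prints 3, not 1
    }
}
```

Scaled up to an inner stream of 150e6 data points per descriptor, this eager draining is exactly the buffering behavior described in the ticket; a hand-written nested iterator only advances the inner source one element at a time.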
   
   ## Verifying this change
   
   This change adds a test, 
MultiStateKeyIteratorTest#testIteratorPullsSingleKeyFromAllDescriptors, that 
fails with the previous implementation and succeeds with the new implementation.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
   - new code should be marginally faster (not verified)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? JavaDocs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] Jiabao-Sun commented on a diff in pull request #23211: [FLINK-32835][runtime] Migrate unit tests in "accumulators" and "blob" packages to JUnit5

2023-08-18 Thread via GitHub


Jiabao-Sun commented on code in PR #23211:
URL: https://github.com/apache/flink/pull/23211#discussion_r1298315587


##
flink-runtime/src/test/java/org/apache/flink/runtime/accumulators/StringifiedAccumulatorResultTest.java:
##
@@ -71,16 +70,16 @@ public void 
stringifyingResultsShouldIncorporateAccumulatorLocalValueDirectly()
 final StringifiedAccumulatorResult[] results =
 
StringifiedAccumulatorResult.stringifyAccumulatorResults(accumulatorMap);
 
-assertEquals(1, results.length);
+assertThat(results).isNotEmpty();

Review Comment:
   ```suggestion
   assertThat(results).hasSize(1);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/accumulators/StringifiedAccumulatorResultTest.java:
##
@@ -89,52 +88,50 @@ public void 
stringifyingResultsShouldReportNullLocalValueAsNonnullValueString()
 final StringifiedAccumulatorResult[] results =
 
StringifiedAccumulatorResult.stringifyAccumulatorResults(accumulatorMap);
 
-assertEquals(1, results.length);
+assertThat(results).isNotEmpty();

Review Comment:
   ```suggestion
   assertThat(results).hasSize(1);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/accumulators/StringifiedAccumulatorResultTest.java:
##
@@ -89,52 +88,50 @@ public void 
stringifyingResultsShouldReportNullLocalValueAsNonnullValueString()
 final StringifiedAccumulatorResult[] results =
 
StringifiedAccumulatorResult.stringifyAccumulatorResults(accumulatorMap);
 
-assertEquals(1, results.length);
+assertThat(results).isNotEmpty();
 
 // Note the use of a String with a content of "null" rather than a 
null value
 final StringifiedAccumulatorResult firstResult = results[0];
-assertEquals(name, firstResult.getName());
-assertEquals("NullBearingAccumulator", firstResult.getType());
-assertEquals("null", firstResult.getValue());
+assertThat(firstResult.getName()).isEqualTo(name);
+assertThat(firstResult.getType()).isEqualTo("NullBearingAccumulator");
+assertThat(firstResult.getValue()).isEqualTo("null");
 }
 
 @Test
-public void 
stringifyingResultsShouldReportNullAccumulatorWithNonnullValueAndTypeString() {
+void 
stringifyingResultsShouldReportNullAccumulatorWithNonnullValueAndTypeString() {
 final String name = "a";
 final Map<String, OptionalFailure<Accumulator<?, ?>>> accumulatorMap = 
new HashMap<>();
 accumulatorMap.put(name, null);
 
 final StringifiedAccumulatorResult[] results =
 
StringifiedAccumulatorResult.stringifyAccumulatorResults(accumulatorMap);
 
-assertEquals(1, results.length);
+assertThat(results).isNotEmpty();
 
 // Note the use of String values with content of "null" rather than 
null values
 final StringifiedAccumulatorResult firstResult = results[0];
-assertEquals(name, firstResult.getName());
-assertEquals("null", firstResult.getType());
-assertEquals("null", firstResult.getValue());
+assertThat(firstResult.getName()).isEqualTo(name);
+assertThat(firstResult.getType()).isEqualTo("null");
+assertThat(firstResult.getValue()).isEqualTo("null");
 }
 
 @Test
-public void stringifyingFailureResults() {
+void stringifyingFailureResults() {
 final String name = "a";
 final Map<String, OptionalFailure<Accumulator<?, ?>>> accumulatorMap = 
new HashMap<>();
 accumulatorMap.put(name, OptionalFailure.ofFailure(new 
FlinkRuntimeException("Test")));
 
 final StringifiedAccumulatorResult[] results =
 
StringifiedAccumulatorResult.stringifyAccumulatorResults(accumulatorMap);
 
-assertEquals(1, results.length);
+assertThat(results).isNotEmpty();

Review Comment:
   ```suggestion
   assertThat(results).hasSize(1);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobCacheDeleteTest.java:
##
@@ -117,57 +109,57 @@ private void testDelete(@Nullable JobID jobId1, @Nullable 
JobID jobId2) throws I
 
 // put first BLOB
 TransientBlobKey key1 = (TransientBlobKey) put(server, jobId1, 
data, TRANSIENT_BLOB);
-assertNotNull(key1);
+assertThat(key1).isNotNull();
 
 // put two more BLOBs (same key, other key) for another job ID
 TransientBlobKey key2a = (TransientBlobKey) put(server, jobId2, 
data, TRANSIENT_BLOB);
-assertNotNull(key2a);
+assertThat(key2a).isNotNull();
 BlobKeyTest.verifyKeyDifferentHashEquals(key1, key2a);
 TransientBlobKey key2b = (TransientBlobKey) put(server, jobId2, 
data2, TRANSIENT_BLOB);
-assertNotNull(key2b);
+assertThat(key2b).isNotNull();
 BlobKeyTest.verifyKeyDifferentHashDifferent(key1, key2b);
 
 // issue a DELETE request
-

[GitHub] [flink] lincoln-lil commented on a diff in pull request #23209: [FLINK-32824] Port Calcite's fix for the sql like operator

2023-08-18 Thread via GitHub


lincoln-lil commented on code in PR #23209:
URL: https://github.com/apache/flink/pull/23209#discussion_r1298299922


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/sql/CalcITCase.scala:
##
@@ -410,6 +410,30 @@ class CalcITCase extends BatchTestBase {
 
   @Test
   def testFilterOnString(): Unit = {
+val rows = Seq(row(3, "H.llo"), row(3, "Hello"))

Review Comment:
   nit: put the new test behind the old one



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


