[jira] [Comment Edited] (FLINK-26974) Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure

2023-11-27 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790408#comment-17790408
 ] 

Matthias Pohl edited comment on FLINK-26974 at 11/28/23 7:50 AM:
-

Failure in the GitHub Actions test workflow (FLINK-27075) 
https://github.com/XComp/flink/actions/runs/7006143342/job/19057900392#step:12:23945


was (Author: mapohl):
https://github.com/XComp/flink/actions/runs/7006143342/job/19057900392#step:12:23945

> Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure
> -
>
> Key: FLINK-26974
> URL: https://issues.apache.org/jira/browse/FLINK-26974
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.16.0, 1.17.0, 1.19.0
>Reporter: Yun Gao
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: auto-deprioritized-major, stale-assigned, test-stability
>
> {code:java}
> Mar 31 10:49:17 === FAILURES 
> ===
> Mar 31 10:49:17 __ 
> EmbeddedThreadDependencyTests.test_add_python_file __
> Mar 31 10:49:17 
> Mar 31 10:49:17 self = <pyflink.table.tests.test_dependency.EmbeddedThreadDependencyTests 
> testMethod=test_add_python_file>
> Mar 31 10:49:17 
> Mar 31 10:49:17 def test_add_python_file(self):
> Mar 31 10:49:17 python_file_dir = os.path.join(self.tempdir, 
> "python_file_dir_" + str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir)
> Mar 31 10:49:17 python_file_path = os.path.join(python_file_dir, 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nraise 
> Exception('This function should not be called!')")
> Mar 31 10:49:17 self.t_env.add_python_file(python_file_path)
> Mar 31 10:49:17 
> Mar 31 10:49:17 python_file_dir_with_higher_priority = os.path.join(
> Mar 31 10:49:17 self.tempdir, "python_file_dir_" + 
> str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir_with_higher_priority)
> Mar 31 10:49:17 python_file_path_higher_priority = 
> os.path.join(python_file_dir_with_higher_priority,
> Mar 31 10:49:17 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path_higher_priority, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nreturn a + 2")
> Mar 31 10:49:17 
> self.t_env.add_python_file(python_file_path_higher_priority)
> Mar 31 10:49:17 
> Mar 31 10:49:17 def plus_two(i):
> Mar 31 10:49:17 from test_dependency_manage_lib import add_two
> Mar 31 10:49:17 return add_two(i)
> Mar 31 10:49:17 
> Mar 31 10:49:17 self.t_env.create_temporary_system_function(
> Mar 31 10:49:17 "add_two", udf(plus_two, DataTypes.BIGINT(), 
> DataTypes.BIGINT()))
> Mar 31 10:49:17 table_sink = source_sink_utils.TestAppendSink(
> Mar 31 10:49:17 ['a', 'b'], [DataTypes.BIGINT(), 
> DataTypes.BIGINT()])
> Mar 31 10:49:17 self.t_env.register_table_sink("Results", table_sink)
> Mar 31 10:49:17 t = self.t_env.from_elements([(1, 2), (2, 5), (3, 
> 1)], ['a', 'b'])
> Mar 31 10:49:17 >   t.select(expr.call("add_two", t.a), 
> t.a).execute_insert("Results").wait()
> Mar 31 10:49:17 
> Mar 31 10:49:17 pyflink/table/tests/test_dependency.py:63: 
> Mar 31 10:49:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ _ _ _ _ _ _ _ _ 
> Mar 31 10:49:17 pyflink/table/table_result.py:76: in wait
> Mar 31 10:49:17 get_method(self._j_table_result, "await")()
> Mar 31 10:49:17 
> .tox/py38-cython/lib/python3.8/site-packages/py4j/java_gateway.py:1321: in 
> __call__
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=34001&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=27239





[jira] [Commented] (FLINK-26974) Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure

2023-11-27 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790408#comment-17790408
 ] 

Matthias Pohl commented on FLINK-26974:
---

https://github.com/XComp/flink/actions/runs/7006143342/job/19057900392#step:12:23945

> Python EmbeddedThreadDependencyTests.test_add_python_file failed on azure
> -
>
> Key: FLINK-26974
> URL: https://issues.apache.org/jira/browse/FLINK-26974
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.16.0, 1.17.0, 1.19.0
>Reporter: Yun Gao
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: auto-deprioritized-major, stale-assigned, test-stability
>
> {code:java}
> Mar 31 10:49:17 === FAILURES 
> ===
> Mar 31 10:49:17 __ 
> EmbeddedThreadDependencyTests.test_add_python_file __
> Mar 31 10:49:17 
> Mar 31 10:49:17 self = <pyflink.table.tests.test_dependency.EmbeddedThreadDependencyTests 
> testMethod=test_add_python_file>
> Mar 31 10:49:17 
> Mar 31 10:49:17 def test_add_python_file(self):
> Mar 31 10:49:17 python_file_dir = os.path.join(self.tempdir, 
> "python_file_dir_" + str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir)
> Mar 31 10:49:17 python_file_path = os.path.join(python_file_dir, 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nraise 
> Exception('This function should not be called!')")
> Mar 31 10:49:17 self.t_env.add_python_file(python_file_path)
> Mar 31 10:49:17 
> Mar 31 10:49:17 python_file_dir_with_higher_priority = os.path.join(
> Mar 31 10:49:17 self.tempdir, "python_file_dir_" + 
> str(uuid.uuid4()))
> Mar 31 10:49:17 os.mkdir(python_file_dir_with_higher_priority)
> Mar 31 10:49:17 python_file_path_higher_priority = 
> os.path.join(python_file_dir_with_higher_priority,
> Mar 31 10:49:17 
> "test_dependency_manage_lib.py")
> Mar 31 10:49:17 with open(python_file_path_higher_priority, 'w') as f:
> Mar 31 10:49:17 f.write("def add_two(a):\nreturn a + 2")
> Mar 31 10:49:17 
> self.t_env.add_python_file(python_file_path_higher_priority)
> Mar 31 10:49:17 
> Mar 31 10:49:17 def plus_two(i):
> Mar 31 10:49:17 from test_dependency_manage_lib import add_two
> Mar 31 10:49:17 return add_two(i)
> Mar 31 10:49:17 
> Mar 31 10:49:17 self.t_env.create_temporary_system_function(
> Mar 31 10:49:17 "add_two", udf(plus_two, DataTypes.BIGINT(), 
> DataTypes.BIGINT()))
> Mar 31 10:49:17 table_sink = source_sink_utils.TestAppendSink(
> Mar 31 10:49:17 ['a', 'b'], [DataTypes.BIGINT(), 
> DataTypes.BIGINT()])
> Mar 31 10:49:17 self.t_env.register_table_sink("Results", table_sink)
> Mar 31 10:49:17 t = self.t_env.from_elements([(1, 2), (2, 5), (3, 
> 1)], ['a', 'b'])
> Mar 31 10:49:17 >   t.select(expr.call("add_two", t.a), 
> t.a).execute_insert("Results").wait()
> Mar 31 10:49:17 
> Mar 31 10:49:17 pyflink/table/tests/test_dependency.py:63: 
> Mar 31 10:49:17 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ _ _ _ _ _ _ _ _ 
> Mar 31 10:49:17 pyflink/table/table_result.py:76: in wait
> Mar 31 10:49:17 get_method(self._j_table_result, "await")()
> Mar 31 10:49:17 
> .tox/py38-cython/lib/python3.8/site-packages/py4j/java_gateway.py:1321: in 
> __call__
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=34001&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=27239





[jira] [Comment Edited] (FLINK-33500) Run storing the JobGraph an asynchronous operation

2023-11-27 Thread zhengzhili (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790406#comment-17790406
 ] 

zhengzhili edited comment on FLINK-33500 at 11/28/23 7:40 AM:
--

Is it just a matter of using CompletableFuture to run the 
JobGraphWriter#putJobGraph method? I could fix it. Please assign it to me.


was (Author: JIRAUSER302860):
If I just used CompletableFuture to run the JobGraphWriter#putJobGraph method, 
I could fix it. Please assign to me.

> Run storing the JobGraph an asynchronous operation
> --
>
> Key: FLINK-33500
> URL: https://issues.apache.org/jira/browse/FLINK-33500
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> Currently, submitting a job starts with storing the JobGraph (in HA setups) 
> in the {{{}JobGraphStore{}}}. This includes writing the file to S3 (or some 
> other remote file system). The job submission is done in the 
> {{{}Dispatcher{}}}'s main thread. If writing the {{JobGraph}} is slow, it 
> would block any other operation on the {{{}Dispatcher{}}}. See 
> [Dispatcher#persistAndRunJob|https://github.com/apache/flink/blob/52cbeb90f32ca36c59590df1daa6748995c9b7f8/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L645]
>  as code reference.
> This Jira issue is about moving the job submission into the {{ioExecutor}} as 
> an asynchronous call.
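
For illustration, a minimal sketch of the idea (the names jobGraphWriter, jobGraph, ioExecutor, mainThreadExecutor and runJob mirror the issue description and comment, but this wiring is hypothetical, not the actual Dispatcher code):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Offload the potentially slow JobGraph write (e.g. to S3) to the ioExecutor
// instead of blocking the Dispatcher's main thread.
CompletableFuture<Void> persistFuture =
        CompletableFuture.runAsync(
                () -> {
                    try {
                        jobGraphWriter.putJobGraph(jobGraph);
                    } catch (Exception e) {
                        throw new CompletionException(e);
                    }
                },
                ioExecutor);

// Resume the submission on the main thread once the write has completed.
persistFuture.thenRunAsync(() -> runJob(jobGraph), mainThreadExecutor);
{code}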





[jira] [Commented] (FLINK-33500) Run storing the JobGraph an asynchronous operation

2023-11-27 Thread zhengzhili (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790406#comment-17790406
 ] 

zhengzhili commented on FLINK-33500:


If I just used CompletableFuture to run the JobGraphWriter#putJobGraph method, 
I could fix it. Please assign to me.

> Run storing the JobGraph an asynchronous operation
> --
>
> Key: FLINK-33500
> URL: https://issues.apache.org/jira/browse/FLINK-33500
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>
> Currently, submitting a job starts with storing the JobGraph (in HA setups) 
> in the {{{}JobGraphStore{}}}. This includes writing the file to S3 (or some 
> other remote file system). The job submission is done in the 
> {{{}Dispatcher{}}}'s main thread. If writing the {{JobGraph}} is slow, it 
> would block any other operation on the {{{}Dispatcher{}}}. See 
> [Dispatcher#persistAndRunJob|https://github.com/apache/flink/blob/52cbeb90f32ca36c59590df1daa6748995c9b7f8/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L645]
>  as code reference.
> This Jira issue is about moving the job submission into the {{ioExecutor}} as 
> an asynchronous call.





Re: [PR] [FLINK-26694][table] Support lookup join via a multi-level inheritance of TableFunction [flink]

2023-11-27 Thread via GitHub


xuyangzhong commented on PR #23684:
URL: https://github.com/apache/flink/pull/23684#issuecomment-1829254331

   BTW, the CI seems to have failed. Could you take a look, @YesOrNo828?





Re: [PR] [FLINK-33652]: Fixing hyperlink on first steps page. concepts to concepts/overview, as concepts page is empty. [flink]

2023-11-27 Thread via GitHub


flinkbot commented on PR #23815:
URL: https://github.com/apache/flink/pull/23815#issuecomment-1829229985

   
   ## CI report:
   
   * 9d31a232c911482a75ddb554374d6bed29fda81d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-33652) First Steps documentation is having empty page link

2023-11-27 Thread Pranav Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790386#comment-17790386
 ] 

Pranav Sharma commented on FLINK-33652:
---

Hi [~Wencong Liu] , I have raised a PR [GitHub Pull Request 
#23815|https://github.com/apache/flink/pull/23815].

Thanks!

> First Steps documentation is having empty page link
> ---
>
> Key: FLINK-33652
> URL: https://issues.apache.org/jira/browse/FLINK-33652
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
> Environment: Web
>Reporter: Pranav Sharma
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2023-11-26-15-23-02-007.png, 
> image-2023-11-26-15-25-04-708.png
>
>
>  
> On this page 
> [link|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/try-flink/local_installation/],
>  under the "Summary" heading, the "concepts" link points to an empty page 
> [link_on_concepts|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/].
>  When visited, the browser tab heading also contains raw HTML (see attached screenshots).
> The link should point to concepts/overview instead.





[jira] [Updated] (FLINK-33652) First Steps documentation is having empty page link

2023-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33652:
---
Labels: pull-request-available  (was: )

> First Steps documentation is having empty page link
> ---
>
> Key: FLINK-33652
> URL: https://issues.apache.org/jira/browse/FLINK-33652
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
> Environment: Web
>Reporter: Pranav Sharma
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2023-11-26-15-23-02-007.png, 
> image-2023-11-26-15-25-04-708.png
>
>
>  
> On this page 
> [link|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/try-flink/local_installation/],
>  under the "Summary" heading, the "concepts" link points to an empty page 
> [link_on_concepts|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/].
>  When visited, the browser tab heading also contains raw HTML (see attached screenshots).
> The link should point to concepts/overview instead.





[PR] [FLINK-33652]: Fixing hyperlink on first steps page. concepts to concepts/overview, as concepts page is empty. [flink]

2023-11-27 Thread via GitHub


phraniiac opened a new pull request, #23815:
URL: https://github.com/apache/flink/pull/23815

   
   
   
   
   ## What is the purpose of the change
   
   Correctly point the concepts hyperlink on the First Steps page.
   
   
   ## Brief change log
 - Modified the hyperlink from concepts to concepts/overview
   
   ## Verifying this change
   
   Locally tested.
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Assigned] (FLINK-32269) CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-32269:
--

Assignee: Jiabao Sun

> CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP
> 
>
> Key: FLINK-32269
> URL: https://issues.apache.org/jira/browse/FLINK-32269
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49532&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=15797
> {noformat}
> Jun 01 03:40:51 03:40:51.881 [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 104.874 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase
> Jun 01 03:40:51 03:40:51.881 [ERROR] 
> CreateTableAsITCase.testCreateTableAsInStatementSet  Time elapsed: 40.729 s  
> <<< FAILURE!
> Jun 01 03:40:51 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase.testCreateTableAsInStatementSet(CreateTableAsITCase.java:50)
> Jun 01 03:40:51   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 03:40:51   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 03:40:51   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 03:40:51   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}





[jira] [Resolved] (FLINK-32269) CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-32269.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

> CreateTableAsITCase.testCreateTableAsInStatementSet fails on AZP
> 
>
> Key: FLINK-32269
> URL: https://issues.apache.org/jira/browse/FLINK-32269
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.19.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49532&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=15797
> {noformat}
> Jun 01 03:40:51 03:40:51.881 [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 104.874 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase
> Jun 01 03:40:51 03:40:51.881 [ERROR] 
> CreateTableAsITCase.testCreateTableAsInStatementSet  Time elapsed: 40.729 s  
> <<< FAILURE!
> Jun 01 03:40:51 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Jun 01 03:40:51   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Jun 01 03:40:51   at 
> org.apache.flink.table.sql.codegen.CreateTableAsITCase.testCreateTableAsInStatementSet(CreateTableAsITCase.java:50)
> Jun 01 03:40:51   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 01 03:40:51   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 01 03:40:51   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 01 03:40:51   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}





[jira] [Resolved] (FLINK-31033) UsingRemoteJarITCase.testUdfInRemoteJar failed with assertion

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-31033.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

> UsingRemoteJarITCase.testUdfInRemoteJar failed with assertion
> -
>
> Key: FLINK-31033
> URL: https://issues.apache.org/jira/browse/FLINK-31033
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.19.0
>
>
> {{UsingRemoteJarITCase.testUdfInRemoteJar}} failed with assertion:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46009&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=18050
> {code}
> Feb 10 15:28:15 [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 249.499 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.UsingRemoteJarITCase
> Feb 10 15:28:15 [ERROR] UsingRemoteJarITCase.testUdfInRemoteJar  Time 
> elapsed: 40.786 s  <<< FAILURE!
> Feb 10 15:28:15 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.UsingRemoteJarITCase.testUdfInRemoteJar(UsingRemoteJarITCase.java:106)
> [...]
> {code}





[jira] [Resolved] (FLINK-31141) CreateTableAsITCase.testCreateTableAs fails

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-31141.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

> CreateTableAsITCase.testCreateTableAs fails
> ---
>
> Key: FLINK-31141
> URL: https://issues.apache.org/jira/browse/FLINK-31141
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Rui Fan
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: stale-assigned, test-stability
> Fix For: 1.19.0
>
>
> CreateTableAsITCase.testCreateTableAs fails in 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46323&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=14772]
>  
> {code:java}
> Feb 20 13:50:12 [ERROR] Failures: 
> Feb 20 13:50:12 [ERROR] CreateTableAsITCase.testCreateTableAs
> Feb 20 13:50:12 [ERROR]   Run 1: Did not get expected results before timeout, 
> actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Feb 20 13:50:12 [INFO]   Run 2: PASS
> Feb 20 13:50:12 [INFO] 
> Feb 20 13:50:12 [INFO] 
> Feb 20 13:50:12 [ERROR] Tests run: 15, Failures: 1, Errors: 0, Skipped: 0 
> {code}





[jira] [Assigned] (FLINK-31141) CreateTableAsITCase.testCreateTableAs fails

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-31141:
--

Assignee: Jiabao Sun  (was: dalongliu)

> CreateTableAsITCase.testCreateTableAs fails
> ---
>
> Key: FLINK-31141
> URL: https://issues.apache.org/jira/browse/FLINK-31141
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Rui Fan
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> CreateTableAsITCase.testCreateTableAs fails in 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46323&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=14772]
>  
> {code:java}
> Feb 20 13:50:12 [ERROR] Failures: 
> Feb 20 13:50:12 [ERROR] CreateTableAsITCase.testCreateTableAs
> Feb 20 13:50:12 [ERROR]   Run 1: Did not get expected results before timeout, 
> actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Feb 20 13:50:12 [INFO]   Run 2: PASS
> Feb 20 13:50:12 [INFO] 
> Feb 20 13:50:12 [INFO] 
> Feb 20 13:50:12 [ERROR] Tests run: 15, Failures: 1, Errors: 0, Skipped: 0 
> {code}





[jira] [Assigned] (FLINK-31033) UsingRemoteJarITCase.testUdfInRemoteJar failed with assertion

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-31033:
--

Assignee: Jiabao Sun

> UsingRemoteJarITCase.testUdfInRemoteJar failed with assertion
> -
>
> Key: FLINK-31033
> URL: https://issues.apache.org/jira/browse/FLINK-31033
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
>
> {{UsingRemoteJarITCase.testUdfInRemoteJar}} failed with assertion:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46009&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=18050
> {code}
> Feb 10 15:28:15 [ERROR] Tests run: 10, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 249.499 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.UsingRemoteJarITCase
> Feb 10 15:28:15 [ERROR] UsingRemoteJarITCase.testUdfInRemoteJar  Time 
> elapsed: 40.786 s  <<< FAILURE!
> Feb 10 15:28:15 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
> Feb 10 15:28:15   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Feb 10 15:28:15   at 
> org.apache.flink.table.sql.codegen.UsingRemoteJarITCase.testUdfInRemoteJar(UsingRemoteJarITCase.java:106)
> [...]
> {code}





[jira] [Resolved] (FLINK-31339) PlannerScalaFreeITCase.testImperativeUdaf

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-31339.

Resolution: Fixed

Fixed in master(1.19): 63996b5c7fe15d792e6a74d5323b008b9a762b52

> PlannerScalaFreeITCase.testImperativeUdaf
> -
>
> Key: FLINK-31339
> URL: https://issues.apache.org/jira/browse/FLINK-31339
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.19.0
>
>
> {{PlannerScalaFreeITCase.testImperativeUdaf}} failed:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46812&view=logs&j=87489130-75dc-54e4-1f45-80c30aa367a3&t=73da6d75-f30d-5d5a-acbe-487a9dcff678&l=15012
> {code}
> Mar 05 05:55:50 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 62.028 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase
> Mar 05 05:55:50 [ERROR] PlannerScalaFreeITCase.testImperativeUdaf  Time 
> elapsed: 40.924 s  <<< FAILURE!
> Mar 05 05:55:50 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase.testImperativeUdaf(PlannerScalaFreeITCase.java:43)
> [...]
> {code}





[jira] [Updated] (FLINK-31339) PlannerScalaFreeITCase.testImperativeUdaf

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-31339:
---
Fix Version/s: 1.19.0

> PlannerScalaFreeITCase.testImperativeUdaf
> -
>
> Key: FLINK-31339
> URL: https://issues.apache.org/jira/browse/FLINK-31339
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.19.0
>
>
> {{PlannerScalaFreeITCase.testImperativeUdaf}} failed:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46812&view=logs&j=87489130-75dc-54e4-1f45-80c30aa367a3&t=73da6d75-f30d-5d5a-acbe-487a9dcff678&l=15012
> {code}
> Mar 05 05:55:50 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 62.028 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase
> Mar 05 05:55:50 [ERROR] PlannerScalaFreeITCase.testImperativeUdaf  Time 
> elapsed: 40.924 s  <<< FAILURE!
> Mar 05 05:55:50 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase.testImperativeUdaf(PlannerScalaFreeITCase.java:43)
> [...]
> {code}





Re: [PR] [FLINK-31339][tests] Fix unstable tests of flink-end-to-end-tests-sql module [flink]

2023-11-27 Thread via GitHub


leonardBang merged PR #23507:
URL: https://github.com/apache/flink/pull/23507





[jira] [Updated] (FLINK-33669) Update the documentation for RestartStrategy, Checkpoint Storage, and State Backend.

2023-11-27 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-33669:
---
Affects Version/s: 1.19.0

> Update the documentation for RestartStrategy, Checkpoint Storage, and State 
> Backend.
> 
>
> Key: FLINK-33669
> URL: https://issues.apache.org/jira/browse/FLINK-33669
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Documentation
>Affects Versions: 1.19.0
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>
> After the deprecation of complex Java object getter and setter methods in 
> FLIP-381, Flink now recommends the use of ConfigOptions for the configuration 
> of RestartStrategy, Checkpoint Storage, and State Backend. It is necessary 
> that we update FLINK documentation to clearly instruct users on this new 
> recommended approach.
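
For illustration, a sketch of the recommended ConfigOption-based style (assuming the option classes in org.apache.flink.configuration; the exact set of options to document is part of this task):

{code:java}
import org.apache.flink.configuration.CheckpointingOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestartStrategyOptions;
import org.apache.flink.configuration.StateBackendOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
// Restart strategy via ConfigOptions instead of env.setRestartStrategy(...).
conf.set(RestartStrategyOptions.RESTART_STRATEGY, "fixed-delay");
conf.set(RestartStrategyOptions.RESTART_STRATEGY_FIXED_DELAY_ATTEMPTS, 3);
// State backend and checkpoint storage via ConfigOptions instead of setters.
conf.set(StateBackendOptions.STATE_BACKEND, "rocksdb");
conf.set(CheckpointingOptions.CHECKPOINT_STORAGE, "filesystem");
conf.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, "s3://bucket/checkpoints");

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);
{code}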





[jira] [Assigned] (FLINK-33669) Update the documentation for RestartStrategy, Checkpoint Storage, and State Backend.

2023-11-27 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-33669:
--

Assignee: Junrui Li

> Update the documentation for RestartStrategy, Checkpoint Storage, and State 
> Backend.
> 
>
> Key: FLINK-33669
> URL: https://issues.apache.org/jira/browse/FLINK-33669
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Documentation
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>
> After the deprecation of complex Java object getter and setter methods in 
> FLIP-381, Flink now recommends the use of ConfigOptions for the configuration 
> of RestartStrategy, Checkpoint Storage, and State Backend. It is necessary 
> that we update FLINK documentation to clearly instruct users on this new 
> recommended approach.





[jira] [Created] (FLINK-33670) Public operators cannot be reused in multi sinks

2023-11-27 Thread Lyn Zhang (Jira)
Lyn Zhang created FLINK-33670:
-

 Summary: Public operators cannot be reused in multi sinks
 Key: FLINK-33670
 URL: https://issues.apache.org/jira/browse/FLINK-33670
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: Lyn Zhang
 Attachments: image-2023-11-28-14-31-30-153.png

Dear all:

   I find that some public operators cannot be reused when submitting a job 
with multiple sinks. Here is an example:
{code:java}
CREATE TABLE source (
    id              STRING,
    ts              TIMESTAMP(3),
    v              BIGINT,
    WATERMARK FOR ts AS ts - INTERVAL '3' SECOND
) WITH (
    'connector' = 'socket',
    'hostname' = 'localhost',
    'port' = '',
    'byte-delimiter' = '10',
    'format' = 'json'
);
CREATE VIEW source_distinct AS
SELECT * FROM (
    SELECT *, ROW_NUMBER() OVER w AS row_nu
    FROM source
    WINDOW w AS (PARTITION BY id ORDER BY proctime() ASC)
) WHERE row_nu = 1;
CREATE TABLE print1 (
     id             STRING,
     ts             TIMESTAMP(3)
) WITH('connector' = 'blackhole');
INSERT INTO print1 SELECT id, ts FROM source_distinct;
CREATE TABLE print2 (
     id              STRING,
     ts              TIMESTAMP(3),
     v               BIGINT
) WITH('connector' = 'blackhole');
INSERT INTO print2
SELECT id, TUMBLE_START(ts, INTERVAL '20' SECOND), SUM(v)
FROM source_distinct
GROUP BY TUMBLE(ts, INTERVAL '20' SECOND), id; {code}
!image-2023-11-28-14-31-30-153.png|width=384,height=145!

I checked the code: Flink adds the CoreRules.PROJECT_MERGE rule by default. 
This produces different rel digests for the deduplicate operator and 
ultimately causes the matching of common operators to fail.

In a real production environment, reusing common operators such as deduplicate 
is more valuable than project merging. A good solution would be to prevent the 
project merge from crossing shuffle operators in multi-sink cases.

What are your thoughts on this? Looking forward to your reply.





[jira] [Assigned] (FLINK-31339) PlannerScalaFreeITCase.testImperativeUdaf

2023-11-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-31339:
--

Assignee: Jiabao Sun

> PlannerScalaFreeITCase.testImperativeUdaf
> -
>
> Key: FLINK-31339
> URL: https://issues.apache.org/jira/browse/FLINK-31339
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
>
> {{PlannerScalaFreeITCase.testImperativeUdaf}} failed:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46812&view=logs&j=87489130-75dc-54e4-1f45-80c30aa367a3&t=73da6d75-f30d-5d5a-acbe-487a9dcff678&l=15012
> {code}
> Mar 05 05:55:50 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 62.028 s <<< FAILURE! - in 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase
> Mar 05 05:55:50 [ERROR] PlannerScalaFreeITCase.testImperativeUdaf  Time 
> elapsed: 40.924 s  <<< FAILURE!
> Mar 05 05:55:50 org.opentest4j.AssertionFailedError: Did not get expected 
> results before timeout, actual result: 
> [{"before":null,"after":{"user_name":"Bob","order_cnt":1},"op":"c"}, 
> {"before":null,"after":{"user_name":"Alice","order_cnt":1},"op":"c"}, 
> {"before":{"user_name":"Bob","order_cnt":1},"after":null,"op":"d"}, 
> {"before":null,"after":{"user_name":"Bob","order_cnt":2},"op":"c"}]. ==> 
> expected: <true> but was: <false>
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> Mar 05 05:55:50   at 
> org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.checkJsonResultFile(SqlITCaseBase.java:168)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.SqlITCaseBase.runAndCheckSQL(SqlITCaseBase.java:111)
> Mar 05 05:55:50   at 
> org.apache.flink.table.sql.codegen.PlannerScalaFreeITCase.testImperativeUdaf(PlannerScalaFreeITCase.java:43)
> [...]
> {code}





[jira] [Created] (FLINK-33669) Update the documentation for RestartStrategy, Checkpoint Storage, and State Backend.

2023-11-27 Thread Junrui Li (Jira)
Junrui Li created FLINK-33669:
-

 Summary: Update the documentation for RestartStrategy, Checkpoint 
Storage, and State Backend.
 Key: FLINK-33669
 URL: https://issues.apache.org/jira/browse/FLINK-33669
 Project: Flink
  Issue Type: Technical Debt
  Components: Documentation
Reporter: Junrui Li


After the deprecation of complex Java object getter and setter methods in 
FLIP-381, Flink now recommends the use of ConfigOptions for the configuration 
of RestartStrategy, Checkpoint Storage, and State Backend. It is necessary that 
we update FLINK documentation to clearly instruct users on this new recommended 
approach.





[jira] [Commented] (FLINK-31463) When I use apache flink1.12.2 version, the following akka error often occurs.

2023-11-27 Thread Xinglong Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790377#comment-17790377
 ] 

Xinglong Wang commented on FLINK-31463:
---

[~martijnvisser] How can this problem be solved?

> When I use apache flink1.12.2 version, the following akka error often occurs.
> -
>
> Key: FLINK-31463
> URL: https://issues.apache.org/jira/browse/FLINK-31463
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.15.4
>Reporter: Zhuang Liu 
>Priority: Major
>
> When I use Apache Flink 1.12.2, the following Akka error often occurs:
> java.util.concurrent.TimeoutException: Remote system has been silent for too
> long. (more than 48.0 hours)
> at
> akka.remote.ReliableDeliverySupervisor$$anonfun$idle$1.applyOrElse(Endpoint.scala:375)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
> at akka.remote.ReliableDeliverySupervisor.aroundReceive(Endpoint.scala:203)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 
> I checked and found that 48 hours ago a process did hang inside Flink, and 
> the Flink job was restarted. How should this be dealt with? Is this a bug in 
> Akka or in Flink? Thank you!
>  
>  





[jira] [Updated] (FLINK-26010) Develop ArchUnit test (infra) for filesystems test code

2023-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26010:
---
Labels: pull-request-available  (was: )

> Develop ArchUnit test (infra) for filesystems test code
> ---
>
> Key: FLINK-26010
> URL: https://issues.apache.org/jira/browse/FLINK-26010
> Project: Flink
>  Issue Type: Improvement
>  Components: Test Infrastructure
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Minor
>  Labels: pull-request-available
>
> ArchUnit test (infra) should be developed for filesystems submodules after 
> the ArchUnit infra for test code has been built.
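
For illustration, a minimal sketch of an ArchUnit rule (generic ArchUnit API usage; the package and rule below are made up, not Flink's actual test-infra rules):

{code:java}
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;

// Import the test classes of a filesystem module and check a naming rule.
JavaClasses testClasses =
        new ClassFileImporter().importPackages("org.apache.flink.fs");
ArchRule rule =
        classes()
                .that().haveSimpleNameEndingWith("ITCase")
                .should().bePublic();
rule.check(testClasses);
{code}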





Re: [PR] [FLINK-26010] Develop archunit test for filesystem test code [flink]

2023-11-27 Thread via GitHub


PatrickRen commented on code in PR #23798:
URL: https://github.com/apache/flink/pull/23798#discussion_r1407290209


##
flink-filesystems/flink-azure-fs-hadoop/pom.xml:
##
@@ -47,6 +47,14 @@ under the License.
provided

 
+   

Review Comment:
   typo `ArchUnit`. Same issue in all poms.






[jira] [Updated] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2023-11-27 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-33581:
--
Description: Deprecate the non-ConfigOption objects in the 
StreamExecutionEnvironment, CheckpointConfig, and ExecutionConfig, and 
ultimately removing them in FLINK 2.0  (was: Deprecate he non-ConfigOption 
objects in the StreamExecutionEnvironment, CheckpointConfig, and 
ExecutionConfig, and ultimately removing them in FLINK 2.0)

> FLIP-381: Deprecate configuration getters/setters that return/set complex 
> Java objects
> --
>
> Key: FLINK-33581
> URL: https://issues.apache.org/jira/browse/FLINK-33581
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Deprecate the non-ConfigOption objects in the StreamExecutionEnvironment, 
> CheckpointConfig, and ExecutionConfig, and ultimately removing them in FLINK 
> 2.0





[jira] [Closed] (FLINK-33581) FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2023-11-27 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-33581.
---
Fix Version/s: 1.19.0
   Resolution: Done

Done via
0e0099b4eb1285929fec02326f661cba899eedcd
139db3f4bc7faed4478393a91a063ad54d15a523
dae2eb5b61f71b9453a73e4f0b3c69fd28f54ebf

> FLIP-381: Deprecate configuration getters/setters that return/set complex 
> Java objects
> --
>
> Key: FLINK-33581
> URL: https://issues.apache.org/jira/browse/FLINK-33581
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / DataStream
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Deprecate the non-ConfigOption objects in the StreamExecutionEnvironment, 
> CheckpointConfig, and ExecutionConfig, and ultimately removing them in FLINK 
> 2.0





Re: [PR] [FLINK-33581][core] Deprecate configuration getters/setters that return/set complex Java objects. [flink]

2023-11-27 Thread via GitHub


zhuzhurk closed pull request #23758: [FLINK-33581][core] Deprecate 
configuration getters/setters that return/set complex Java objects.
URL: https://github.com/apache/flink/pull/23758





Re: [PR] [FLINK-31339][tests] Fix unstable tests of flink-end-to-end-tests-sql module [flink]

2023-11-27 Thread via GitHub


Jiabao-Sun commented on PR #23507:
URL: https://github.com/apache/flink/pull/23507#issuecomment-1829182205

   > Thanks @Jiabao-Sun and @ruanhang1993 for the contribution. Could you 
update the PR description? IIUC, it has become outdated.
   
   Thanks @leonardBang and @ruanhang1993 for the review.
   The PR description has been updated.





Re: [PR] [FLINK-31339][tests] Fix unstable tests of flink-end-to-end-tests-sql module [flink]

2023-11-27 Thread via GitHub


leonardBang commented on PR #23507:
URL: https://github.com/apache/flink/pull/23507#issuecomment-1829154493

   Thanks @Jiabao-Sun and @ruanhang1993 for the contribution. Could you update 
the PR description? IIUC, it has become outdated.





Re: [PR] Add GPG key for 1.17.2 release [flink-docker]

2023-11-27 Thread via GitHub


Myasuka merged PR #167:
URL: https://github.com/apache/flink-docker/pull/167





Re: [PR] Update Dockerfiles for 1.17.2 release [flink-docker]

2023-11-27 Thread via GitHub


Myasuka merged PR #166:
URL: https://github.com/apache/flink-docker/pull/166





[jira] [Commented] (FLINK-27681) Improve the availability of Flink when the RocksDB file is corrupted.

2023-11-27 Thread Yue Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790359#comment-17790359
 ] 

Yue Ma commented on FLINK-27681:


 
{quote}Failing the job directly is fine for me, but I guess the PR doesn't 
fail the job, it just fails the current checkpoint, right?
{quote}
I think it may be used together with 
{*}execution.checkpointing.tolerable-failed-checkpoints{*}; generally 
speaking, for an important job, users will also pay attention to whether 
checkpoint production succeeds.
{quote}could you provide some simple benchmark here?
{quote}
I did some testing on my local machine. It takes about 60-70 ms to check a 
64 MB SST file; checking a 10 GB RocksDB instance takes about 10 seconds. More 
detailed testing may be needed later.
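
For illustration, a sketch of the verification step under discussion (assuming the RocksJava API's verifyChecksum(); how to schedule the check and how a failure feeds into checkpointing are open design questions):

{code:java}
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Verify the checksums of all live files of a local RocksDB instance.
// On corruption this throws, which would fail the current checkpoint and let
// execution.checkpointing.tolerable-failed-checkpoints decide when the job
// itself should fail.
static void verifyLocalRocksDbFiles(RocksDB db) throws RocksDBException {
    db.verifyChecksum();
}
{code}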


 

> Improve the availability of Flink when the RocksDB file is corrupted.
> -
>
> Key: FLINK-27681
> URL: https://issues.apache.org/jira/browse/FLINK-27681
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Ming Li
>Assignee: Yue Ma
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-08-23-15-06-16-717.png
>
>
> We have encountered several cases where the RocksDB checksum does not match 
> or block verification fails when a job is restored. The cause is generally a 
> problem with the machine hosting the task, which corrupts the files uploaded 
> to HDFS; however, by the time we discover the problem, a long time (a dozen 
> minutes to half an hour) has already passed. I'm not sure if anyone else has 
> had a similar problem.
> Since such a file is referenced by incremental checkpoints for a long time, 
> once the maximum number of retained checkpoints is exceeded, we can only keep 
> using the file until it is no longer referenced. When the job fails, it 
> cannot be recovered.
> Therefore we are considering:
> 1. Can RocksDB periodically check whether all files are correct and detect 
> the problem in time?
> 2. Can Flink automatically roll back to the previous checkpoint when there is 
> a problem with the checkpoint data? Even with manual intervention, one can 
> only try to recover from an existing checkpoint or discard the entire state.
> 3. Can we cap the number of references to a file based on the maximum number 
> of retained checkpoints? When the number of references exceeds the maximum 
> number of checkpoints minus one, the Task side would be required to upload a 
> new file for the next reference. We are not sure whether this would guarantee 
> that the newly uploaded file is correct.





[jira] [Commented] (FLINK-33597) Can not use a nested column for a join condition

2023-11-27 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790358#comment-17790358
 ] 

lincoln lee commented on FLINK-33597:
-

[~twalthr] IIUC, 
[FLINK-31830|https://issues.apache.org/jira/browse/FLINK-31830] might be what 
you're referring to. [~qingyue] has investigated this issue in detail and has 
recently been preparing a FLIP.

> Can not use a nested column for a join condition
> 
>
> Key: FLINK-33597
> URL: https://issues.apache.org/jira/browse/FLINK-33597
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> Query:
> {code}
> SELECT A.after.CUSTOMER_ID FROM `CUSTOMERS` A INNER JOIN `PRODUCTS` B ON 
> A.after.CUSTOMER_ID = B.after.PURCHASER;
> {code}
> fails with:
> {code}
> java.lang.RuntimeException: Error while applying rule 
> FlinkProjectWatermarkAssignerTransposeRule, args 
> [rel#411017:LogicalProject.NONE.any.None: 
> 0.[NONE].[NONE](input=RelSubset#411016,exprs=[$2, $2.CUSTOMER_ID]), 
> rel#411015:LogicalWatermarkAssigner.NONE.any.None: 
> 0.[NONE].[NONE](input=RelSubset#411014,rowtime=$rowtime,watermark=SOURCE_WATERMARK())]
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:250)
> ...
> Caused by: java.lang.IllegalArgumentException: Type mismatch:
> rel rowtype: RecordType(RecordType:peek_no_expand(INTEGER NOT NULL 
> CUSTOMER_ID, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" CUSTOMER_NAME, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" TITLE, INTEGER DOB) after, 
> INTEGER $f8) NOT NULL
> equiv rowtype: RecordType(RecordType:peek_no_expand(INTEGER NOT NULL 
> CUSTOMER_ID, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" CUSTOMER_NAME, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" TITLE, INTEGER DOB) after, 
> INTEGER NOT NULL $f8) NOT NULL
> Difference:
> $f8: INTEGER -> INTEGER NOT NULL
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:592)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:613)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:144)
>   ... 50 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-27 Thread via GitHub


fsk119 commented on code in PR #23809:
URL: https://github.com/apache/flink/pull/23809#discussion_r1406993946


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliTableauResultView.java:
##
@@ -128,8 +128,13 @@ private void printBatchResults(AtomicInteger 
receivedRowCount) {
 resultDescriptor.getRowDataStringConverter(),
 resultDescriptor.maxColumnWidth(),
 false,
-false);
-style.print(resultRows.iterator(), terminal.writer());
+false,
+resultDescriptor.isPrintQueryTimeCost());
+style.print(resultRows.iterator(), terminal.writer(), 
getQueryBeginTime());
+}
+
+long getQueryBeginTime() {
+return System.currentTimeMillis();

Review Comment:
   If we get the time here, I think we don't take the submission time into 
consideration.
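   A minimal sketch of the alternative this hints at: capture the begin time 
before the statement is submitted, so the printed cost includes submission 
latency. Except for the `style.print` call shown in the diff, the names here 
are assumptions:
```java
// Hypothetical: record the wall-clock time before submitting the query ...
long queryBeginTime = System.currentTimeMillis();
TableResult result = tableEnv.executeSql(statement); // assumed submission call site
// ... and pass it through to the printer once the results arrive.
style.print(resultRows.iterator(), terminal.writer(), queryBeginTime);
```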



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/print/TableauStyle.java:
##
@@ -92,12 +95,14 @@ public final class TableauStyle implements PrintStyle {
 int[] columnWidths,
 int maxColumnWidth,
 boolean printNullAsEmpty,
-boolean printRowKind) {
+boolean printRowKind,
+boolean printQueryTimeCost) {

Review Comment:
   `printQueryTimeCost` relies on the input parameter of the `print` method, so 
I wonder whether it would be better not to introduce this parameter here.



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/print/TableauStyle.java:
##
@@ -117,6 +122,10 @@ public final class TableauStyle implements PrintStyle {
 
 @Override
  public void print(Iterator<RowData> it, PrintWriter printWriter) {
+print(it, printWriter, -1);

Review Comment:
   Since -1 is a magic number, it's better to introduce a named constant that 
represents its meaning.
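   A hypothetical sketch of that suggestion (the class is trimmed to the two 
`print` overloads; the constant name is an assumption):
```java
import java.io.PrintWriter;
import java.util.Iterator;

public final class TableauStyleSketch {
    /** Sentinel meaning "no query begin time was provided". */
    private static final long UNKNOWN_QUERY_BEGIN_TIME = -1L;

    public void print(Iterator<String> it, PrintWriter printWriter) {
        // Delegate with the named sentinel instead of a bare -1.
        print(it, printWriter, UNKNOWN_QUERY_BEGIN_TIME);
    }

    public void print(Iterator<String> it, PrintWriter printWriter, long queryBeginTime) {
        // printing logic omitted in this sketch
    }
}
```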
   



##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java:
##
@@ -125,6 +126,14 @@ private TableConfigOptions() {}
 + "This only applies to columns with 
variable-length types (e.g. CHAR, VARCHAR, STRING) in the streaming mode. "
 + "Fixed-length types are printed in the 
batch mode using a deterministic column width.");
 
+@Documentation.TableOption(execMode = Documentation.ExecMode.BATCH)
+public static final ConfigOption<Boolean> DISPLAY_QUERY_TIME_COST =
+ConfigOptions.key("table.display.query-time-cost")

Review Comment:
   I think we are introducing an option that works for the SQL Client only? If 
so, I prefer to move it into the SQL Client options right now.
   
   BTW, I think it also influences the behaviour of other SQL statements. What 
about `sql-client.display.print-time-cost`?



##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/TableConfigOptions.java:
##
@@ -125,6 +126,14 @@ private TableConfigOptions() {}
 + "This only applies to columns with 
variable-length types (e.g. CHAR, VARCHAR, STRING) in the streaming mode. "
 + "Fixed-length types are printed in the 
batch mode using a deterministic column width.");
 
+@Documentation.TableOption(execMode = Documentation.ExecMode.BATCH)
+public static final ConfigOption<Boolean> DISPLAY_QUERY_TIME_COST =
+ConfigOptions.key("table.display.query-time-cost")
+.booleanType()
+.defaultValue(false)

Review Comment:
   I think we should mark the default value as true here, because Hive and 
Presto both print the time cost by default.
   
   You can refer to 
[hive](https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java#L86)
 for more details.
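   Taken together with the naming suggestion above, the option could look 
roughly like this (a hypothetical sketch, not the PR's actual code):
```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class SqlClientOptionsSketch {
    // SQL-Client-scoped key, enabled by default as in Hive/Presto.
    public static final ConfigOption<Boolean> PRINT_TIME_COST =
            ConfigOptions.key("sql-client.display.print-time-cost")
                    .booleanType()
                    .defaultValue(true)
                    .withDescription(
                            "Whether the SQL Client prints the time cost after a query finishes.");
}
```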



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/print/TableauStyle.java:
##
@@ -117,6 +122,10 @@ public final class TableauStyle implements PrintStyle {
 
 @Override
  public void print(Iterator<RowData> it, PrintWriter printWriter) {
+print(it, printWriter, -1);
+}
+
+public void print(Iterator<RowData> it, PrintWriter printWriter, long 
queryBeginTime) {

Review Comment:
   Do we need to modify this here? Please take a look at 
`CliTableauResultView`: the SQL Client controls the print style by itself in 
streaming mode. 
   
   
![image](https://github.com/apache/flink/assets/33114724/3908fe91-675e-42ee-8bbc-d7a0b5e1936d)
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-27 Thread Jiang Xin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiang Xin updated FLINK-33668:
--
Description: 
With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network memory 
and the parallelism of tasks by limiting the number of buffers for each 
InputGate and ResultPartition. However, when too many shuffle tasks are running 
simultaneously on the same TaskManager, "Insufficient number of network 
buffers" errors would still occur. This usually happens when Slot Sharing Group 
is enabled or a TaskManager contains multiple slots.

We want to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously.

  was:
With [FLINK-30469|https://issues.apache.org/jira/browse/FLINK-30469]  and 
[FLINK-31643|https://issues.apache.org/jira/browse/FLINK-31643], we have 
decoupled the shuffle network memory and the parallelism of tasks by limiting 
the number of buffers for each InputGate and ResultPartition. However, when too 
many shuffle tasks are running simultaneously on the same TaskManager, 
"Insufficient number of network buffers" errors would still occur. This usually 
happens when Slot Sharing Group is enabled or a TaskManager contains multiple 
slots.

So we need to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously.


> Decoupling Shuffle network memory and job topology
> --
>
> Key: FLINK-33668
> URL: https://issues.apache.org/jira/browse/FLINK-33668
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Jiang Xin
>Priority: Major
> Fix For: 1.19.0
>
>
> With FLINK-30469  and FLINK-31643, we have decoupled the shuffle network 
> memory and the parallelism of tasks by limiting the number of buffers for 
> each InputGate and ResultPartition. However, when too many shuffle tasks are 
> running simultaneously on the same TaskManager, "Insufficient number of 
> network buffers" errors would still occur. This usually happens when Slot 
> Sharing Group is enabled or a TaskManager contains multiple slots.
> We want to make sure that the TaskManager does not encounter "Insufficient 
> number of network buffers" even if there are dozens of InputGates and 
> ResultPartitions running on the same TaskManager simultaneously.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33643) Allow StreamExecutionEnvironment's executeAsync API to use default JobName

2023-11-27 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu resolved FLINK-33643.
---
Fix Version/s: 1.19.0
   Resolution: Fixed

resolved in master: 220287539854c8f32f9d85541026f2b018bd5ef3

> Allow StreamExecutionEnvironment's executeAsync API to use default JobName
> --
>
> Key: FLINK-33643
> URL: https://issues.apache.org/jira/browse/FLINK-33643
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.19.0
>Reporter: Matt Wang
>Assignee: Matt Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> On the `execute` API of StreamExecutionEnvironment, jobName is allowed to be 
> null. In this case, the default jobName in StreamGraphGenerator 
> (`DEFAULT_STREAMING_JOB_NAME` or `DEFAULT_BATCH_JOB_NAME`) will be used, but 
> the logic of `executeAsync` does not allow jobName to be null. I think the 
> processing logic should be unified here.
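A hedged sketch of the unification the issue asks for, reusing the default name 
mentioned above (the actual fix may differ):
{code:java}
// Hypothetical: let executeAsync fall back to the generator's default job
// name instead of rejecting null, mirroring the behavior of execute(String).
public JobClient executeAsync(String jobName) throws Exception {
    String effectiveJobName =
            jobName != null ? jobName : StreamGraphGenerator.DEFAULT_STREAMING_JOB_NAME;
    return executeAsync(getStreamGraph(effectiveJobName)); // assumed internal helper
}
{code}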



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33668) Decoupling Shuffle network memory and job topology

2023-11-27 Thread Jiang Xin (Jira)
Jiang Xin created FLINK-33668:
-

 Summary: Decoupling Shuffle network memory and job topology
 Key: FLINK-33668
 URL: https://issues.apache.org/jira/browse/FLINK-33668
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Reporter: Jiang Xin
 Fix For: 1.19.0


With [FLINK-30469|https://issues.apache.org/jira/browse/FLINK-30469]  and 
[FLINK-31643|https://issues.apache.org/jira/browse/FLINK-31643], we have 
decoupled the shuffle network memory and the parallelism of tasks by limiting 
the number of buffers for each InputGate and ResultPartition. However, when too 
many shuffle tasks are running simultaneously on the same TaskManager, 
"Insufficient number of network buffers" errors would still occur. This usually 
happens when Slot Sharing Group is enabled or a TaskManager contains multiple 
slots.

So we need to make sure that the TaskManager does not encounter "Insufficient 
number of network buffers" even if there are dozens of InputGates and 
ResultPartitions running on the same TaskManager simultaneously.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33643][runtime] Allow StreamExecutionEnvironment's executeAsync API to use default JobName [flink]

2023-11-27 Thread via GitHub


huwh merged PR #23794:
URL: https://github.com/apache/flink/pull/23794


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-27 Thread via GitHub


chucheng92 commented on PR #23612:
URL: https://github.com/apache/flink/pull/23612#issuecomment-1828958938

   > This is looking pretty good. Since implementing FLIP-297 is assigned to 
@chucheng92, we should get his input as well!
   > 
   > @jeyhunkarimov have you signed up for the Flink JIRA? You may need to 
request access on the Flink dev list. (I tried to find instructions, but I am 
not seeing them immediately.)
   
   @jeyhunkarimov good work! Currently I'm busy with other work; feel free to 
assign it to yourself.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-30400][build] Stop bundling flink-connector-base [flink-connector-pulsar]

2023-11-27 Thread via GitHub


ruanhang1993 commented on PR #61:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/61#issuecomment-1828957321

   @leonardBang Thanks for the review. I have rebased onto the main branch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33466) Bounded Kafka source never finishes after restore from savepoint

2023-11-27 Thread Jonas Weile (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonas Weile updated FLINK-33466:

Component/s: Connectors / Kafka

> Bounded Kafka source never finishes after restore from savepoint
> 
>
> Key: FLINK-33466
> URL: https://issues.apache.org/jira/browse/FLINK-33466
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Kafka
>Affects Versions: 1.17.1
>Reporter: Jonas Weile
>Priority: Major
>
> When setting up a bounded Kafka source, if the job is restored from a 
> savepoint before the source has finished, then the Kafka source will never 
> transition to a finished state.
> This seems to be because the noMoreSplitsAssignment variable in the 
> SourceReaderBase class is not snapshotted. Therefore, after restoring from a 
> checkpoint/savepoint, the noMoreSplitsAssignment variable will default to 
> false, and the first condition in the private helper method 
> finishedOrAvailableLater() in the SourceReaderBase class will always evaluate 
> to true.
> Since this originates in the base class, the problem should hold for all 
> source types, not just Kafka.
>  
> Would it make sense to snapshot the noMoreSplitsAssignment variable?
> I would love to take this on as a first task if appropriate.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33667) Implement restore tests for MatchRecognize node

2023-11-27 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33667:
--

 Summary: Implement restore tests for MatchRecognize node
 Key: FLINK-33667
 URL: https://issues.apache.org/jira/browse/FLINK-33667
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on PR #23680:
URL: https://github.com/apache/flink/pull/23680#issuecomment-1828467648

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-27 Thread via GitHub


jnh5y commented on PR #23612:
URL: https://github.com/apache/flink/pull/23612#issuecomment-1828465094

   This is looking pretty good.  Since implementing FLIP-297 is assigned to 
@chucheng92, we should get his input as well!
   
   @jeyhunkarimov have you signed up for the Flink JIRA?  You may need to 
request access on the Flink dev list.  (I tried to find instructions, but I am 
not seeing them immediately.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1406632513


##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/SqlShowDatabases.java:
##
@@ -29,14 +31,55 @@
 import java.util.Collections;
 import java.util.List;
 
-/** SHOW Databases sql call. */
+import static java.util.Objects.requireNonNull;
+
+/**
+ * SHOW Databases sql call. The full syntax for show databases is as follows:
+ *
+ * {@code
+ * SHOW DATABASES [ ( FROM | IN ) catalog_name] [ [NOT] (LIKE | ILIKE)
+ * <sql_like_pattern> ] statement
+ * }
+ */
 public class SqlShowDatabases extends SqlCall {
 
 public static final SqlSpecialOperator OPERATOR =
 new SqlSpecialOperator("SHOW DATABASES", SqlKind.OTHER);
 
-public SqlShowDatabases(SqlParserPos pos) {
+private final String preposition;
+private final SqlIdentifier catalogName;
+private final String likeType;
+private final SqlCharStringLiteral likeLiteral;
+private final boolean notLike;
+
+public String[] getCatalog() {
+return catalogName == null || catalogName.names.isEmpty()
+? new String[] {}
+: catalogName.names.toArray(new String[0]);
+}
+
+public SqlShowDatabases(
+SqlParserPos pos,
+String preposition,
+SqlIdentifier catalogName,
+String likeType,
+SqlCharStringLiteral likeLiteral,
+boolean notLike) {
 super(pos);
+this.preposition = preposition;
+this.catalogName =

Review Comment:
   For example, I think it may make sense to check the `SqlIdentifier` 
`catalogName` here to see if it has more than one part.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1406631017


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/converters/SqlShowDatabasesConverter.java:
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.operations.converters;
+
+import org.apache.flink.sql.parser.dql.SqlShowDatabases;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.CatalogManager;
+import org.apache.flink.table.operations.Operation;
+import org.apache.flink.table.operations.ShowDatabasesOperation;
+
+/** A converter for {@link SqlShowDatabases}. */
+public class SqlShowDatabasesConverter implements 
SqlNodeConverter<SqlShowDatabases> {
+
+@Override
+public Operation convertSqlNode(SqlShowDatabases sqlShowDatabases, 
ConvertContext context) {
+if (sqlShowDatabases.getPreposition() == null) {
+return new ShowDatabasesOperation(
+sqlShowDatabases.getLikeType(),
+sqlShowDatabases.getLikeSqlPattern(),
+sqlShowDatabases.isNotLike());
+} else {
+CatalogManager catalogManager = context.getCatalogManager();
+String[] fullCatalogName = sqlShowDatabases.getCatalog();
+if (fullCatalogName.length > 1) {

Review Comment:
   Two things:
   
   1.  I wonder if this check should be done sooner.
   2. Does the client testing allow for dealing with errors? If so, adding this 
case to `catalog_database.q` would be nice.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1406630035


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowDatabasesOperation.java:
##
@@ -20,26 +20,108 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.functions.SqlLikeUtils;
 
+import java.util.Arrays;
+
+import static java.util.Objects.requireNonNull;
 import static 
org.apache.flink.table.api.internal.TableResultUtils.buildStringArrayResult;
 
 /** Operation to describe a SHOW DATABASES statement. */
 @Internal
 public class ShowDatabasesOperation implements ShowOperation {
 
+private final String preposition;
+private final String catalogName;
+private final LikeType likeType;
+private final String likePattern;
+private final boolean notLike;
+
+public ShowDatabasesOperation() {
+// "SHOW DATABASES" command with all options being default
+this.preposition = null;
+this.catalogName = null;
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+
+public ShowDatabasesOperation(String likeType, String likePattern, boolean 
notLike) {
+this.preposition = null;
+this.catalogName = null;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
+public ShowDatabasesOperation(
+String preposition,
+String catalogName,
+String likeType,
+String likePattern,
+boolean notLike) {
+this.preposition = preposition;
+this.catalogName = catalogName;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
 @Override
 public String asSummaryString() {
-return "SHOW DATABASES";
+StringBuilder builder = new StringBuilder();
+builder.append("SHOW DATABASES");
+if (preposition != null) {
+builder.append(String.format(" %s %s", preposition, catalogName));
+}
+if (likeType != null) {
+if (notLike) {
+builder.append(String.format(" NOT %s '%s'", likeType.name(), 
likePattern));
+} else {
+builder.append(String.format(" %s '%s'", likeType.name(), 
likePattern));
+}
+}
+return builder.toString();
 }
 
 @Override
 public TableResultInternal execute(Context ctx) {
+String cName =
+catalogName == null ? 
ctx.getCatalogManager().getCurrentCatalog() : catalogName;
 String[] databases =
-ctx.getCatalogManager()
-
.getCatalogOrThrowException(ctx.getCatalogManager().getCurrentCatalog())
-.listDatabases().stream()
+
ctx.getCatalogManager().getCatalogOrThrowException(cName).listDatabases().stream()
 .sorted()
 .toArray(String[]::new);
+
+if (likeType != null) {
+databases =
+Arrays.stream(databases)

Review Comment:
   You could hold on to the 
`ctx.getCatalogManager().getCatalogOrThrowException(cName).listDatabases().stream()`
 as a stream and only sort it once after you've (potentially) filtered it.
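   A minimal sketch of that refactor (hypothetical; `matchesLikePattern` stands 
in for the LIKE/ILIKE check, and `ctx`/`cName` are as in the surrounding 
method):
```java
// Filter first, then sort exactly once before materializing the array.
Stream<String> databases =
        ctx.getCatalogManager().getCatalogOrThrowException(cName).listDatabases().stream();
if (likeType != null) {
    databases = databases.filter(this::matchesLikePattern); // hypothetical helper
}
String[] result = databases.sorted().toArray(String[]::new);
```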



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31481][table] Support enhanced show databases syntax [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23612:
URL: https://github.com/apache/flink/pull/23612#discussion_r1406627733


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/ShowDatabasesOperation.java:
##
@@ -20,26 +20,108 @@
 
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.table.api.internal.TableResultInternal;
+import org.apache.flink.table.functions.SqlLikeUtils;
 
+import java.util.Arrays;
+
+import static java.util.Objects.requireNonNull;
 import static 
org.apache.flink.table.api.internal.TableResultUtils.buildStringArrayResult;
 
 /** Operation to describe a SHOW DATABASES statement. */
 @Internal
 public class ShowDatabasesOperation implements ShowOperation {
 
+private final String preposition;
+private final String catalogName;
+private final LikeType likeType;
+private final String likePattern;
+private final boolean notLike;
+
+public ShowDatabasesOperation() {
+// "SHOW DATABASES" command with all options being default
+this.preposition = null;
+this.catalogName = null;
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+
+public ShowDatabasesOperation(String likeType, String likePattern, boolean 
notLike) {
+this.preposition = null;
+this.catalogName = null;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
+public ShowDatabasesOperation(
+String preposition,
+String catalogName,
+String likeType,
+String likePattern,
+boolean notLike) {
+this.preposition = preposition;
+this.catalogName = catalogName;
+if (likeType != null) {
+this.likeType = LikeType.of(likeType);
+this.likePattern = requireNonNull(likePattern, "Like pattern must 
not be null");
+this.notLike = notLike;
+} else {
+this.likeType = null;
+this.likePattern = null;
+this.notLike = false;
+}
+}
+
 @Override
 public String asSummaryString() {
-return "SHOW DATABASES";
+StringBuilder builder = new StringBuilder();
+builder.append("SHOW DATABASES");
+if (preposition != null) {
+builder.append(String.format(" %s %s", preposition, catalogName));
+}
+if (likeType != null) {
+if (notLike) {
+builder.append(String.format(" NOT %s '%s'", likeType.name(), 
likePattern));
+} else {
+builder.append(String.format(" %s '%s'", likeType.name(), 
likePattern));
+}
+}
+return builder.toString();
 }
 
 @Override
 public TableResultInternal execute(Context ctx) {
+String cName =
+catalogName == null ? 
ctx.getCatalogManager().getCurrentCatalog() : catalogName;
 String[] databases =
-ctx.getCatalogManager()
-
.getCatalogOrThrowException(ctx.getCatalogManager().getCurrentCatalog())
-.listDatabases().stream()
+
ctx.getCatalogManager().getCatalogOrThrowException(cName).listDatabases().stream()
 .sorted()
 .toArray(String[]::new);
+
+if (likeType != null) {
+databases =
+Arrays.stream(databases)
+.filter(
+row -> {
+if (likeType == LikeType.ILIKE) {
+return notLike
+!= SqlLikeUtils.ilike(row, 
likePattern, "\\");
+} else {

Review Comment:
   I'd be tempted to suggest checking for the other type here.
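   A hypothetical sketch of what that explicit check could look like (assuming 
`SqlLikeUtils.like` mirrors the `ilike` signature used above):
```java
row -> {
    if (likeType == LikeType.ILIKE) {
        return notLike != SqlLikeUtils.ilike(row, likePattern, "\\");
    } else if (likeType == LikeType.LIKE) {
        return notLike != SqlLikeUtils.like(row, likePattern, "\\");
    } else {
        // Fail loudly if a new LikeType is added without updating this branch.
        throw new IllegalStateException("Unexpected like type: " + likeType);
    }
}
```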



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33647] Implement restore tests for LookupJoin node [flink]

2023-11-27 Thread via GitHub


flinkbot commented on PR #23814:
URL: https://github.com/apache/flink/pull/23814#issuecomment-1828441266

   
   ## CI report:
   
   * 40058dbde8aa6676a411809110590296bd40a47c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33647) Implement restore tests for LookupJoin node

2023-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33647:
---
Labels: pull-request-available  (was: )

> Implement restore tests for LookupJoin node
> ---
>
> Key: FLINK-33647
> URL: https://issues.apache.org/jira/browse/FLINK-33647
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33647] Implement restore tests for LookupJoin node [flink]

2023-11-27 Thread via GitHub


bvarghese1 opened a new pull request, #23814:
URL: https://github.com/apache/flink/pull/23814

   
   
   ## What is the purpose of the change
   
   Implement restore tests for LookupJoin node
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
  - Added restore tests for the LookupJoin node, which verify the generated 
compiled plan against the saved compiled plan
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32986][test] Fix createTemporaryFunction type inference error [flink]

2023-11-27 Thread via GitHub


jnh5y commented on PR #23586:
URL: https://github.com/apache/flink/pull/23586#issuecomment-1828429647

   @jeyhunkarimov Thanks for contributing to Flink! Can you sort out the merge 
conflict?
   
   @lincoln-lil may be the best person to review this PR.  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23680:
URL: https://github.com/apache/flink/pull/23680#discussion_r1406606618


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/JoinTestPrograms.java:
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.testutils;
+
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecJoin;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+/** {@link TableTestProgram} definitions for testing {@link StreamExecJoin}. */
+public class JoinTestPrograms {

Review Comment:
   Good catch.  I missed it!



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/JoinTestPrograms.java:
##
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.testutils;
+
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecJoin;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+/** {@link TableTestProgram} definitions for testing {@link StreamExecJoin}. */
+public class JoinTestPrograms {
+static final TableTestProgram NON_WINDOW_INNER_JOIN;
+static final TableTestProgram NON_WINDOW_INNER_JOIN_WITH_NULL;
+static final TableTestProgram CROSS_JOIN;
+static final TableTestProgram JOIN_WITH_FILTER;
+static final TableTestProgram INNER_JOIN_WITH_DUPLICATE_KEY;
+static final TableTestProgram INNER_JOIN_WITH_NON_EQUI_JOIN;
+static final TableTestProgram INNER_JOIN_WITH_EQUAL_PK;
+static final TableTestProgram INNER_JOIN_WITH_PK;
+static final TableTestProgram LEFT_JOIN;
+static final TableTestProgram FULL_OUTER;
+static final TableTestProgram RIGHT_JOIN;
+static final TableTestProgram SEMI_JOIN;
+static final TableTestProgram ANTI_JOIN;
+
+static final SourceTestStep EMPLOYEE =
+SourceTestStep.newBuilder("EMPLOYEE")
+.addSchema("deptno int", "salary bigint", "name varchar")
+.addOption("filterable-fields", "salary")
+.producedBeforeRestore(
+Row.of(null, 101L, "Adam"),
+Row.of(1, 1L, "Baker"),
+Row.of(2, 2L, "Charlie"),
+Row.of(3, 2L, "Don"),
+Row.of(7, 6L, "Victor"))
+.producedAfterRestore(Row.of(4, 3L, "Juliet"))
+.build();
+
+static final SourceTestStep DEPARTMENT =
+SourceTestStep.newBuilder("DEPARTMENT")
+.addSchema(
+"department_num int", "b2 bigint", "b3 int", 
"department_name varchar")
+.producedBeforeRestore(
+Row.of(null, 102L, 0, "Accounting"),
+Row.of(1, 1L, 0, "Research"),
+Row.of(2, 2L, 1, "Human Resources"),
+Row.of(2, 3L, 2, "HR"),
+

Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23680:
URL: https://github.com/apache/flink/pull/23680#discussion_r1406606039


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java:
##
@@ -280,6 +280,15 @@ public static void registerLocalRawResultsObserver(
 TestValuesRuntimeFunctions.registerLocalRawResultsObserver(tableName, 
observer);
 }
 
+/**
+ * Removes observers for a table.
+ *
+ * @param tableName the table name of the registered table sink.
+ */
+public static void unregisterLocalRawResultsObserver(String tableName) {

Review Comment:
   I like it; I changed this name throughout. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23680:
URL: https://github.com/apache/flink/pull/23680#discussion_r1406604611


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/RestoreTestBase.java:
##
@@ -181,12 +186,25 @@ public void generateTestSetupFiles(TableTestProgram 
program) throws Exception {
 CommonTestUtils.waitForJobStatus(jobClient, 
Collections.singletonList(JobStatus.FINISHED));
 final Path savepointPath = Paths.get(new URI(savepoint));
 final Path savepointDirPath = getSavepointPath(program, 
getLatestMetadata());
-Files.createDirectories(savepointDirPath);
+// Delete directory savepointDirPath if it already exists
+if (Files.exists(savepointDirPath)) {
+
Files.walk(savepointDirPath).map(Path::toFile).forEach(java.io.File::delete);
+} else {
+Files.createDirectories(savepointDirPath);
+}

Review Comment:
   Removed!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23680:
URL: https://github.com/apache/flink/pull/23680#discussion_r1406605453


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/RestoreTestBase.java:
##
@@ -224,7 +242,9 @@ void testRestore(TableTestProgram program, ExecNodeMetadata 
metadata) throws Exc
 tEnv.loadPlan(PlanReference.fromFile(getPlanPath(program, 
metadata)));
 compiledPlan.execute().await();
 for (SinkTestStep sinkTestStep : program.getSetupSinkTestSteps()) {
-
assertThat(TestValuesTableFactory.getRawResultsAsStrings(sinkTestStep.name))
+// TODO: Fix this.  If the records after a restore cause 
retractions,
+// this approach will not work.
+
assertThat(TestValuesTableFactory.getResultsAsStrings(sinkTestStep.name))

Review Comment:
   Ok, I made the sinkTestStep configurable.  Hopefully that is a reasonable 
approach.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23680:
URL: https://github.com/apache/flink/pull/23680#discussion_r1406603980


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/JoinTestPrograms.java:
##
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.testutils;
+
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecJoin;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+import org.apache.flink.types.RowKind;
+
+/** {@link TableTestProgram} definitions for testing {@link StreamExecJoin}. */
+public class JoinTestPrograms {
+
+static final TableTestProgram NON_WINDOW_INNER_JOIN;
+static final TableTestProgram NON_WINDOW_INNER_JOIN_WITH_NULL;
+static final TableTestProgram JOIN;
+static final TableTestProgram INNER_JOIN;
+static final TableTestProgram JOIN_WITH_FILTER;
+static final TableTestProgram INNER_JOIN_WITH_DUPLICATE_KEY;
+static final TableTestProgram INNER_JOIN_WITH_NON_EQUI_JOIN;
+static final TableTestProgram INNER_JOIN_WITH_EQUAL_PK;
+static final TableTestProgram INNER_JOIN_WITH_PK;
+
+static final SourceTestStep SOURCE_A =
+SourceTestStep.newBuilder("A")
+.addSchema("a1 int", "a2 bigint", "a3 varchar")
+.producedBeforeRestore(
+Row.of(1, 1L, "Hi"),
+Row.of(2, 2L, "Hello"),
+Row.of(3, 2L, "Hello world"))
+.producedAfterRestore(Row.of(4, 3L, "Hello there"))
+.build();
+
+static final SourceTestStep SOURCE_B =
+SourceTestStep.newBuilder("B")
+.addSchema("b1 int", "b2 bigint", "b3 int", "b4 varchar", 
"b5 bigint")
+.producedBeforeRestore(
+Row.of(1, 1L, 0, "Hallo", 1L),
+Row.of(2, 2L, 1, "Hallo Welt", 2L),
+Row.of(2, 3L, 2, "Hallo Welt wie", 1L),
+Row.of(3, 1L, 2, "Hallo Welt wie gehts", 1L))
+.producedAfterRestore(Row.of(2, 4L, 3, "Hallo Welt wie 
gehts", 4L))
+.build();
+static final SourceTestStep SOURCE_T1 =
+SourceTestStep.newBuilder("T1")
+.addSchema("a int", "b bigint", "c varchar")
+.producedBeforeRestore(
+Row.of(1, 1L, "Hi1"),
+Row.of(1, 2L, "Hi2"),
+Row.of(1, 2L, "Hi2"),
+Row.of(1, 5L, "Hi3"),
+Row.of(2, 7L, "Hi5"),
+Row.of(1, 9L, "Hi6"),
+Row.of(1, 8L, "Hi8"),
+Row.of(3, 8L, "Hi9"))
+.producedAfterRestore(Row.of(1, 1L, "PostRestore"))
+.build();
+static final SourceTestStep SOURCE_T2 =
+SourceTestStep.newBuilder("T2")
+.addSchema("a int", "b bigint", "c varchar")
+.producedBeforeRestore(
+Row.of(1, 1L, "HiHi"), Row.of(2, 2L, "HeHe"), 
Row.of(3, 2L, "HeHe"))
+.producedAfterRestore(Row.of(2, 1L, "PostRestoreRight"))
+.build();
+
+static {
+NON_WINDOW_INNER_JOIN =
+TableTestProgram.of("non-window-inner-join", "test non-window 
inner join")
+.setupTableSource(SOURCE_T1)
+.setupTableSource(SOURCE_T2)
+.setupTableSink(
+SinkTestStep.newBuilder("MySink")
+.addSchema("a int", "c1 varchar", "c2 
varchar")
+.consumedBeforeRestore(
+Row.of(1, "HiHi", "Hi2"),
+Row.of(1, 

Re: [PR] [FLINK-33470] Implement restore tests for Join node [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23680:
URL: https://github.com/apache/flink/pull/23680#discussion_r1406603182


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/JoinTestPrograms.java:
##
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.testutils;
+
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecJoin;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+import org.apache.flink.types.RowKind;
+
+/** {@link TableTestProgram} definitions for testing {@link StreamExecJoin}. */
+public class JoinTestPrograms {
+
+static final TableTestProgram NON_WINDOW_INNER_JOIN;
+static final TableTestProgram NON_WINDOW_INNER_JOIN_WITH_NULL;
+static final TableTestProgram JOIN;
+static final TableTestProgram INNER_JOIN;
+static final TableTestProgram JOIN_WITH_FILTER;
+static final TableTestProgram INNER_JOIN_WITH_DUPLICATE_KEY;
+static final TableTestProgram INNER_JOIN_WITH_NON_EQUI_JOIN;
+static final TableTestProgram INNER_JOIN_WITH_EQUAL_PK;
+static final TableTestProgram INNER_JOIN_WITH_PK;
+
+static final SourceTestStep SOURCE_A =
+SourceTestStep.newBuilder("A")
+.addSchema("a1 int", "a2 bigint", "a3 varchar")
+.producedBeforeRestore(
+Row.of(1, 1L, "Hi"),
+Row.of(2, 2L, "Hello"),
+Row.of(3, 2L, "Hello world"))
+.producedAfterRestore(Row.of(4, 3L, "Hello there"))
+.build();
+
+static final SourceTestStep SOURCE_B =
+SourceTestStep.newBuilder("B")
+.addSchema("b1 int", "b2 bigint", "b3 int", "b4 varchar", 
"b5 bigint")
+.producedBeforeRestore(
+Row.of(1, 1L, 0, "Hallo", 1L),
+Row.of(2, 2L, 1, "Hallo Welt", 2L),
+Row.of(2, 3L, 2, "Hallo Welt wie", 1L),
+Row.of(3, 1L, 2, "Hallo Welt wie gehts", 1L))
+.producedAfterRestore(Row.of(2, 4L, 3, "Hallo Welt wie 
gehts", 4L))
+.build();
+static final SourceTestStep SOURCE_T1 =
+SourceTestStep.newBuilder("T1")
+.addSchema("a int", "b bigint", "c varchar")
+.producedBeforeRestore(
+Row.of(1, 1L, "Hi1"),
+Row.of(1, 2L, "Hi2"),
+Row.of(1, 2L, "Hi2"),
+Row.of(1, 5L, "Hi3"),
+Row.of(2, 7L, "Hi5"),
+Row.of(1, 9L, "Hi6"),
+Row.of(1, 8L, "Hi8"),
+Row.of(3, 8L, "Hi9"))
+.producedAfterRestore(Row.of(1, 1L, "PostRestore"))
+.build();
+static final SourceTestStep SOURCE_T2 =
+SourceTestStep.newBuilder("T2")
+.addSchema("a int", "b bigint", "c varchar")
+.producedBeforeRestore(
+Row.of(1, 1L, "HiHi"), Row.of(2, 2L, "HeHe"), 
Row.of(3, 2L, "HeHe"))
+.producedAfterRestore(Row.of(2, 1L, "PostRestoreRight"))
+.build();
+
+static {
+NON_WINDOW_INNER_JOIN =
+TableTestProgram.of("non-window-inner-join", "test non-window 
inner join")
+.setupTableSource(SOURCE_T1)
+.setupTableSource(SOURCE_T2)
+.setupTableSink(
+SinkTestStep.newBuilder("MySink")
+.addSchema("a int", "c1 varchar", "c2 
varchar")
+.consumedBeforeRestore(
+Row.of(1, "HiHi", "Hi2"),
+Row.of(1, 

Re: [PR] [FLINK-33664][ci] Setup cron build for java 21 [flink]

2023-11-27 Thread via GitHub


snuyanzin commented on PR #23813:
URL: https://github.com/apache/flink/pull/23813#issuecomment-1828227753

   @zentol, @XComp this is about enabling Java 21 in cron builds. Since you've 
participated in the similar activity for Java 17, could you please have a look 
here?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790153#comment-17790153
 ] 

huiyang chi edited comment on FLINK-33443 at 11/27/23 4:03 PM:
---

Okay! have a good day! :D


was (Author: JIRAUSER303032):
Okay! have a good day

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the map attribute 
> from the table is ordered as \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest/have ready to raise a PR for is introducing 
> another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two orders, we 
> can ascertain that the expected attributes and their contents are received as 
> expected, regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.
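A hypothetical sketch of such an order-insensitive assertion (AssertJ, assuming 
tab-separated columns as in the error output above):
{code:java}
assertThat(result)
        .isIn(
                "[1,2,3]\t{1:\"a\",2:\"b\"}\t{\"f1\":3,\"f2\":\"c\"}",
                "[1,2,3]\t{2:\"b\",1:\"a\"}\t{\"f1\":3,\"f2\":\"c\"}");
{code}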



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790153#comment-17790153
 ] 

huiyang chi commented on FLINK-33443:
-

Okay! have a good day

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the map attribute 
> from the table is ordered as \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest/have ready to raise a PR for is introducing 
> another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two orders, we 
> can ascertain that the expected attributes and their contents are received as 
> expected, regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huiyang chi updated FLINK-33443:

Attachment: (was: image-2023-11-27-09-36-03-081.png)

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the map attribute 
> from the table is ordered as \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest/have ready to raise a PR for is introducing 
> another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two orders, we 
> can ascertain that the expected attributes and their contents are received as 
> expected, regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


[ https://issues.apache.org/jira/browse/FLINK-33443 ]


huiyang chi deleted comment on FLINK-33443:
-

was (Author: JIRAUSER303032):
Oh as this example:



!image-2023-11-27-09-36-03-081.png!

 

The direct cause might simply be a change in the Map implementation or the 
environment (different JDKs have different implementations). Besides that, 
there are other kinds of causes we can detect too!  
https://www.cs.cornell.edu/~legunsen/pubs/GyoriETAL16NonDexToolDemo.pdf

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
> Attachments: image-2023-11-27-09-36-03-081.png
>
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is 
> \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the ordering of 
> the map attribute from the table is \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest (and have ready to raise a PR for) is 
> introducing another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two 
> orders, we can ascertain that the expected attributes and their contents 
> are received as expected regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Add GPG key for 1.17.2 release [flink-docker]

2023-11-27 Thread via GitHub


Myasuka opened a new pull request, #167:
URL: https://github.com/apache/flink-docker/pull/167

   The GPG key 2E0E1AB5D39D55E608071FB9F795C02A4D2482B3 can be found 
[here](https://dist.apache.org/repos/dist/release/flink/KEYS), signed by Yun 
Tang.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Update Dockerfiles for 1.17.2 release [flink-docker]

2023-11-27 Thread via GitHub


Myasuka opened a new pull request, #166:
URL: https://github.com/apache/flink-docker/pull/166

   The GPG key `2E0E1AB5D39D55E608071FB9F795C02A4D2482B3` can be found 
[here](https://dist.apache.org/repos/dist/release/flink/KEYS), signed by `Yun 
Tang`. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33621) Table Store with PyFlink

2023-11-27 Thread Mehmet Aktas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790147#comment-17790147
 ] 

Mehmet Aktas commented on FLINK-33621:
--

Hi [~lzljs3620320],

Thank you for your comment. The pointers you shared are helpful.

I realized that I should re-phrase my question as follows: Is there a way to 
set up `Table Store/Paimon` with the `MiniCluster` managed by PyFlink? We need 
this to run Flink jobs locally with Paimon.
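
A minimal sketch of the setup we have in mind (assuming PyFlink >= 1.16 for 
get_config().set(...); the jar path and warehouse directory are placeholder 
assumptions, and the catalog options follow the Paimon documentation). PyFlink 
executes this on its locally managed MiniCluster when no remote cluster is 
configured:
{code:python}
from pyflink.table import EnvironmentSettings, TableEnvironment

env_settings = EnvironmentSettings.in_streaming_mode()
t_env = TableEnvironment.create(env_settings)

# Make the Paimon runtime jar visible to the locally managed MiniCluster.
t_env.get_config().set(
    "pipeline.jars", "file:///path/to/paimon-flink.jar")  # placeholder path

# Register a Paimon catalog backed by a local warehouse directory.
t_env.execute_sql("""
    CREATE CATALOG paimon_catalog WITH (
        'type' = 'paimon',
        'warehouse' = 'file:///tmp/paimon-warehouse'
    )
""")
t_env.execute_sql("USE CATALOG paimon_catalog")
{code}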

> Table Store with PyFlink
> 
>
> Key: FLINK-33621
> URL: https://issues.apache.org/jira/browse/FLINK-33621
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Documentation, Table Store
>Reporter: Mehmet Aktas
>Priority: Major
>  Labels: documentation
>
> We are working on a project that requires setting up [Table 
> Store|https://nightlies.apache.org/flink/flink-table-store-docs-stable/] with 
> PyFlink. However, we could not find any documentation on (1) if this is 
> possible, (2) if possible, how to set it up. We could find only the links 
> through [https://nightlies.apache.org/flink/flink-table-store-docs-stable/,] 
> which does not seem to address PyFlink.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790146#comment-17790146
 ] 

Martijn Visser commented on FLINK-33443:


[~hannahchi] I feel like you're not actually responding to my comment, but just 
trying to convince me/the community that we should use your tool. I'm more than 
happy to have a meaningful discussion, but I don't think that's currently 
happening, so I'll refrain from commenting. I will, however, close more tickets 
if they follow the same paradigm as I've seen recently. 

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
> Attachments: image-2023-11-27-09-36-03-081.png
>
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is 
> \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the ordering of 
> the map attribute from the table is \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest (and have ready to raise a PR for) is 
> introducing another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two 
> orders, we can ascertain that the expected attributes and their contents 
> are received as expected regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790136#comment-17790136
 ] 

huiyang chi commented on FLINK-33443:
-

Oh, as in this example:



!image-2023-11-27-09-36-03-081.png!

 

The direct cause might be a simple change of the Map implementation or of the 
environment (different JDKs have different implementations). And besides that, 
there are other kinds of issues we can detect too: 
https://www.cs.cornell.edu/~legunsen/pubs/GyoriETAL16NonDexToolDemo.pdf
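
A minimal, self-contained illustration of that non-determinism (a hypothetical 
demo class, not taken from the Flink code base):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class MapOrderDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "a");
        map.put(2, "b");
        // The iteration order of HashMap is unspecified by its contract, so
        // this may print {1=a, 2=b} or {2=b, 1=a}; tools like NonDex shuffle
        // the order on purpose to expose tests that depend on it.
        System.out.println(map);
    }
}
{code}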

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
> Attachments: image-2023-11-27-09-36-03-081.png
>
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is 
> \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the ordering of 
> the map attribute from the table is \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest (and have ready to raise a PR for) is 
> introducing another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two 
> orders, we can ascertain that the expected attributes and their contents 
> are received as expected regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huiyang chi updated FLINK-33443:

Attachment: image-2023-11-27-09-36-03-081.png

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
> Attachments: image-2023-11-27-09-36-03-081.png
>
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is 
> \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the ordering of 
> the map attribute from the table is \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest (and have ready to raise a PR for) is 
> introducing another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two 
> orders, we can ascertain that the expected attributes and their contents 
> are received as expected regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-27 Thread huiyang chi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789859#comment-17789859
 ] 

huiyang chi edited comment on FLINK-33443 at 11/27/23 3:35 PM:
---

Hi Martijn: I can understand that this problem is currently not exposed in the 
CI. About 80% of the problems we detect are exposed mainly by changes in the 
test environment (and are more dangerous on lower-version JDKs), while for the 
CI the test environment won't change. In detail, ~50% of the problems we 
address are caused by the non-determinism of hash-based collections (Set and 
Map); this is one example the Apache repo 
[wicket|https://github.com/apache/wicket/commit/ed64e166dcba6715eafcbb7ca460d2b87e84cffc]
 had encountered and addressed, and this kind of problem is resolved for the 
long-term gain :) . Here is a list of this kind of problem: 
https://github.com/TestingResearchIllinois/idoft


was (Author: JIRAUSER303032):
Hi Martijn: I can understand that this problem is currently not exposed in the 
CI. About 80% of the problems we detect are exposed mainly by changes in the 
test environment (and are more dangerous on lower-version JDKs), while for the 
CI the test environment won't change. In detail, ~50% of the problems we 
address are caused by the non-determinism of hash-based collections (Set and 
Map); this is one example the Apache repo 
[wicket|https://github.com/apache/wicket/commit/ed64e166dcba6715eafcbb7ca460d2b87e84cffc]
 had encountered and addressed, and this kind of problem is resolved for the 
long-term gain :) . I also have some tests that were detected and fixed: 
https://github.com/MyEnthusiastic/flink/pulls

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because of the assumption that the order of 
> elements received in the _result_ variable will be consistent. There are 
> currently two versions of query output that can be stored in _result._
>  # The expected order, where the output of the map attribute is 
> \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the ordering of 
> the map attribute from the table is \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix that I can suggest (and have ready to raise a PR for) is 
> introducing another assertion on the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two 
> orders, we can ascertain that the expected attributes and their contents 
> are received as expected regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33635) Some connectors can not compile in 1.19-SNAPSHOT

2023-11-27 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790133#comment-17790133
 ] 

Martijn Visser edited comment on FLINK-33635 at 11/27/23 3:32 PM:
--

One note from my end: I think it's not unreasonable that the {{main}} branch of 
an externalized connector could be failing because something has changed on the 
Flink side. We will have exactly the same situation if something is changed as 
part of Flink 2.0, and it will require work for a connector maintainer at that 
point. That being said, that's not the case for this specific situation (since 
the discussion should have been brought back first to the Dev mailing list)

In a normal situation, this wouldn't have been a bug, but just a Jira ticket to 
make Connector X compatible with Flink 1.x.y


was (Author: martijnvisser):
One note from my end: I think it's not unreasonable that the {{main}} branch of 
an externalized connector could be failing because something has changed on the 
Flink side. We will have exactly the same situation if something is changed as 
part of Flink 2.0, and it will require work for a connector maintainer at that 
point. That being said, that's not the case for this specific situation (since 
the discussion should have been brought back first to the Dev mailing list)

> Some connectors can not compile in 1.19-SNAPSHOT
> 
>
> Key: FLINK-33635
> URL: https://issues.apache.org/jira/browse/FLINK-33635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.19.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Blocker
> Fix For: 1.19.0
>
>
> The sink API compatibility was broken in FLINK-25857. 
> org.apache.flink.api.connector.sink2.Sink#createWriter(InitContext) was 
> changed to 
> org.apache.flink.api.connector.sink2.Sink#createWriter(WriterInitContext).
> All external connector sinks can no longer compile because of this change.
> For example:
> es: 
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6976181890/job/18984287421
> aws: 
> https://github.com/apache/flink-connector-aws/actions/runs/6975253086/job/18982104160
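
For illustration, a sketch of the breakage using the class and method names 
quoted above; the String element type, the class name, and the writer body are 
placeholders, and the throws clause follows the pre-change signature:
{code:java}
import java.io.IOException;

import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.api.connector.sink2.WriterInitContext;

public class MyExternalSink implements Sink<String> {

    // Before FLINK-25857, connectors compiled against
    //   SinkWriter<String> createWriter(InitContext context) throws IOException;
    // after it, the parameter type is WriterInitContext, so an override that
    // still uses InitContext no longer matches the interface and fails to compile.
    @Override
    public SinkWriter<String> createWriter(WriterInitContext context) throws IOException {
        throw new UnsupportedOperationException("writer creation omitted in this sketch");
    }
}
{code}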



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32353] [hotfix] DO NOT MERGE: build against new Flink version [flink-connector-cassandra]

2023-11-27 Thread via GitHub


echauchot commented on PR #21:
URL: 
https://github.com/apache/flink-connector-cassandra/pull/21#issuecomment-1828063833

   To pass, this requires releasing flink-connector-parent:1.0.1 with the 
ability to skip ArchUnit tests (for Flink 1.17.1).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33635) Some connectors can not compile in 1.19-SNAPSHOT

2023-11-27 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790133#comment-17790133
 ] 

Martijn Visser commented on FLINK-33635:


One note from my end: I think it's not unreasonable that the {{main}} branch of 
an externalized connector could be failing because something has changed on the 
Flink side. We will have exactly the same situation if something is changed as 
part of Flink 2.0, and it will require work for a connector maintainer at that 
point. That being said, that's not the case for this specific situation (since 
the discussion should have been brought back first to the Dev mailing list)

> Some connectors can not compile in 1.19-SNAPSHOT
> 
>
> Key: FLINK-33635
> URL: https://issues.apache.org/jira/browse/FLINK-33635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.19.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Blocker
> Fix For: 1.19.0
>
>
> The sink API compatibility was broken in FLINK-25857. 
> org.apache.flink.api.connector.sink2.Sink#createWriter(InitContext) was 
> changed to 
> org.apache.flink.api.connector.sink2.Sink#createWriter(WriterInitContext).
> All external connector sinks can no longer compile because of this change.
> For example:
> es: 
> https://github.com/apache/flink-connector-elasticsearch/actions/runs/6976181890/job/18984287421
> aws: 
> https://github.com/apache/flink-connector-aws/actions/runs/6975253086/job/18982104160



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33652) First Steps documentation is having empty page link

2023-11-27 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-33652:
---
Component/s: Documentation

> First Steps documentation is having empty page link
> ---
>
> Key: FLINK-33652
> URL: https://issues.apache.org/jira/browse/FLINK-33652
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
> Environment: Web
>Reporter: Pranav Sharma
>Priority: Minor
> Attachments: image-2023-11-26-15-23-02-007.png, 
> image-2023-11-26-15-25-04-708.png
>
>
>  
> Under this page 
> [link|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/try-flink/local_installation/],
>  under the "Summary" heading, the "concepts" link points to an empty page 
> [link_on_concepts|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/].
>  Upon visiting it, the browser tab title contains raw HTML as well 
> (screenshots attached).
> It should probably point to concepts/overview instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33666) MergeTableLikeUtil uses different constraint name than Schema

2023-11-27 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-33666:
-
Affects Version/s: 1.18.0

> MergeTableLikeUtil uses different constraint name than Schema
> -
>
> Key: FLINK-33666
> URL: https://issues.apache.org/jira/browse/FLINK-33666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Timo Walther
>Priority: Major
>
> {{MergeTableLikeUtil}} uses a different algorithm to name constraints than 
> {{Schema}}. 
> {{Schema}} includes the column names.
> {{MergeTableLikeUtil}} uses a hashCode, which means the generated name might 
> depend on JVM specifics.
> For consistency we should use the same algorithm. I propose to use 
> {{Schema}}'s logic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33666) MergeTableLikeUtil uses different constraint name than Schema

2023-11-27 Thread Timo Walther (Jira)
Timo Walther created FLINK-33666:


 Summary: MergeTableLikeUtil uses different constraint name than 
Schema
 Key: FLINK-33666
 URL: https://issues.apache.org/jira/browse/FLINK-33666
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Timo Walther


{{MergeTableLikeUtil}} uses a different algorithm to name constraints than 
{{Schema}}. 

{{Schema}} includes the column names.
{{MergeTableLikeUtil}} uses a hashCode, which means the generated name might 
depend on JVM specifics.

For consistency we should use the same algorithm. I propose to use {{Schema}}'s 
logic.
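
A hedged sketch of the difference; the "PK_" prefix and the exact format 
strings are illustrative assumptions, not Flink's literal code:
{code:java}
import java.util.Arrays;
import java.util.List;

public class ConstraintNameSketch {
    public static void main(String[] args) {
        List<String> pkColumns = Arrays.asList("user_id", "order_id");

        // Schema-style: derived from the column names, stable on every JVM.
        String fromColumns = "PK_" + String.join("_", pkColumns);
        System.out.println(fromColumns); // PK_user_id_order_id

        // hashCode-style: an identity hash code is not guaranteed to be
        // stable across JVM implementations or even across runs.
        String fromHash = "PK_" + Integer.toHexString(new Object().hashCode());
        System.out.println(fromHash); // e.g. PK_1b6d3586, differs per run
    }
}
{code}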



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-26782] Remove PlannerExpression and related [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23761:
URL: https://github.com/apache/flink/pull/23761#discussion_r1406316719


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/LegacyUserDefinedFunctionInference.java:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.TypeExtractor;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.utils.TypeConversions;
+import org.apache.flink.types.Row;
+
+import org.apache.flink.shaded.guava31.com.google.common.primitives.Primitives;
+
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.math.BigDecimal;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+import java.util.stream.Stream;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toInternalConversionClass;
+import static 
org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType;
+
+/**
+ * Ported {@code UserDefinedFunctionUtils} to run some of the type inference 
for legacy functions in
+ * Table API.
+ */
+@Internal
+@Deprecated
+public class LegacyUserDefinedFunctionInference {
+
+public static InputTypeStrategy 
getInputTypeStrategy(ImperativeAggregateFunction func) {
+return new InputTypeStrategy() {
+@Override
+public ArgumentCount getArgumentCount() {
+return ConstantArgumentCount.any();
+}
+
+@Override
+public Optional<List<DataType>> inferInputTypes(
+CallContext callContext, boolean throwOnFailure) {
+final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+final DataType accType = getAccumulatorType(func);
+final LogicalType[] input =
+Stream.concat(Stream.of(accType), 
argumentDataTypes.stream())
+.map(DataType::getLogicalType)
+.toArray(LogicalType[]::new);
+
+final Optional<Method> foundMethod =
+Stream.of(
+logicalTypesToInternalClasses(input),
+logicalTypesToExternalClasses(input))
+.map(
+signature ->
+getUserDefinedMethod(
+func,
+"accumulate",
+signature,
+input,
+cls ->
+Stream.concat(
+   

Re: [PR] [FLINK-26782] Remove PlannerExpression and related [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23761:
URL: https://github.com/apache/flink/pull/23761#discussion_r1406314626


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/LegacyUserDefinedFunctionInference.java:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.TypeExtractor;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.utils.TypeConversions;
+import org.apache.flink.types.Row;
+
+import org.apache.flink.shaded.guava31.com.google.common.primitives.Primitives;
+
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.math.BigDecimal;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+import java.util.stream.Stream;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toInternalConversionClass;
+import static 
org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType;
+
+/**
+ * Ported {@code UserDefinedFunctionUtils} to run some of the type inference 
for legacy functions in
+ * Table API.
+ */
+@Internal
+@Deprecated
+public class LegacyUserDefinedFunctionInference {
+
+public static InputTypeStrategy 
getInputTypeStrategy(ImperativeAggregateFunction func) {
+return new InputTypeStrategy() {
+@Override
+public ArgumentCount getArgumentCount() {
+return ConstantArgumentCount.any();
+}
+
+@Override
+public Optional<List<DataType>> inferInputTypes(
+CallContext callContext, boolean throwOnFailure) {
+final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+final DataType accType = getAccumulatorType(func);
+final LogicalType[] input =
+Stream.concat(Stream.of(accType), 
argumentDataTypes.stream())
+.map(DataType::getLogicalType)
+.toArray(LogicalType[]::new);
+
+final Optional<Method> foundMethod =
+Stream.of(
+logicalTypesToInternalClasses(input),
+logicalTypesToExternalClasses(input))
+.map(
+signature ->
+getUserDefinedMethod(
+func,
+"accumulate",
+signature,
+input,
+cls ->
+Stream.concat(
+   

[PR] Update Dockerfiles for 1.16.3 release [flink-docker]

2023-11-27 Thread via GitHub


1996fanrui opened a new pull request, #165:
URL: https://github.com/apache/flink-docker/pull/165

   The GPG key (`B2D64016B940A7E0B9B72E0D7D0528B28037D8BC`) can be found in the 
1.16.3 vote mail [1] and KEYS [2].
   
   [1] https://lists.apache.org/thread/dxfmt3v5n0xv5r9tjl30ob5d7y5t7pw3
   [2] https://dist.apache.org/repos/dist/release/flink/KEYS
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26782] Remove PlannerExpression and related [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23761:
URL: https://github.com/apache/flink/pull/23761#discussion_r1406305482


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/LegacyUserDefinedFunctionInference.java:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.TypeExtractor;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.utils.TypeConversions;
+import org.apache.flink.types.Row;
+
+import org.apache.flink.shaded.guava31.com.google.common.primitives.Primitives;
+
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.math.BigDecimal;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+import java.util.stream.Stream;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toInternalConversionClass;
+import static 
org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType;
+
+/**
+ * Ported {@code UserDefinedFunctionUtils} to run some of the type inference 
for legacy functions in
+ * Table API.
+ */
+@Internal
+@Deprecated
+public class LegacyUserDefinedFunctionInference {
+
+public static InputTypeStrategy 
getInputTypeStrategy(ImperativeAggregateFunction func) {
+return new InputTypeStrategy() {
+@Override
+public ArgumentCount getArgumentCount() {
+return ConstantArgumentCount.any();
+}
+
+@Override
+public Optional<List<DataType>> inferInputTypes(
+CallContext callContext, boolean throwOnFailure) {
+final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+final DataType accType = getAccumulatorType(func);
+final LogicalType[] input =
+Stream.concat(Stream.of(accType), 
argumentDataTypes.stream())
+.map(DataType::getLogicalType)
+.toArray(LogicalType[]::new);
+
+final Optional<Method> foundMethod =
+Stream.of(
+logicalTypesToInternalClasses(input),
+logicalTypesToExternalClasses(input))
+.map(
+signature ->
+getUserDefinedMethod(
+func,
+"accumulate",
+signature,
+input,
+cls ->
+Stream.concat(
+   

Re: [PR] [FLINK-26782] Remove PlannerExpression and related [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23761:
URL: https://github.com/apache/flink/pull/23761#discussion_r1406291625


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/LegacyUserDefinedFunctionInference.java:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.TypeExtractor;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.utils.TypeConversions;
+import org.apache.flink.types.Row;
+
+import org.apache.flink.shaded.guava31.com.google.common.primitives.Primitives;
+
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.math.BigDecimal;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+import java.util.stream.Stream;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toInternalConversionClass;
+import static 
org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType;
+
+/**
+ * Ported {@code UserDefinedFunctionUtils} to run some of the type inference 
for legacy functions in
+ * Table API.
+ */
+@Internal
+@Deprecated
+public class LegacyUserDefinedFunctionInference {
+
+public static InputTypeStrategy 
getInputTypeStrategy(ImperativeAggregateFunction func) {
+return new InputTypeStrategy() {
+@Override
+public ArgumentCount getArgumentCount() {
+return ConstantArgumentCount.any();
+}
+
+@Override
+public Optional<List<DataType>> inferInputTypes(
+CallContext callContext, boolean throwOnFailure) {
+final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+final DataType accType = getAccumulatorType(func);
+final LogicalType[] input =
+Stream.concat(Stream.of(accType), 
argumentDataTypes.stream())
+.map(DataType::getLogicalType)
+.toArray(LogicalType[]::new);
+
+final Optional<Method> foundMethod =
+Stream.of(
+logicalTypesToInternalClasses(input),
+logicalTypesToExternalClasses(input))
+.map(
+signature ->
+getUserDefinedMethod(
+func,
+"accumulate",
+signature,
+input,
+cls ->
+Stream.concat(
+   

Re: [PR] [FLINK-26782] Remove PlannerExpression and related [flink]

2023-11-27 Thread via GitHub


jnh5y commented on code in PR #23761:
URL: https://github.com/apache/flink/pull/23761#discussion_r1406286431


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/LegacyUserDefinedFunctionInference.java:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.InvalidTypesException;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.TypeExtractor;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.TypeStrategy;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.utils.TypeConversions;
+import org.apache.flink.types.Row;
+
+import org.apache.flink.shaded.guava31.com.google.common.primitives.Primitives;
+
+import java.lang.reflect.Method;
+import java.lang.reflect.Modifier;
+import java.math.BigDecimal;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Optional;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+import java.util.stream.Stream;
+
+import static 
org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toInternalConversionClass;
+import static 
org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType;
+
+/**
+ * Ported {@code UserDefinedFunctionUtils} to run some of the type inference 
for legacy functions in
+ * Table API.
+ */
+@Internal
+@Deprecated
+public class LegacyUserDefinedFunctionInference {
+
+public static InputTypeStrategy 
getInputTypeStrategy(ImperativeAggregateFunction func) {
+return new InputTypeStrategy() {
+@Override
+public ArgumentCount getArgumentCount() {
+return ConstantArgumentCount.any();
+}
+
+@Override
+public Optional<List<DataType>> inferInputTypes(
+CallContext callContext, boolean throwOnFailure) {
+final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+final DataType accType = getAccumulatorType(func);
+final LogicalType[] input =
+Stream.concat(Stream.of(accType), 
argumentDataTypes.stream())
+.map(DataType::getLogicalType)
+.toArray(LogicalType[]::new);
+
+final Optional<Method> foundMethod =
+Stream.of(
+logicalTypesToInternalClasses(input),
+logicalTypesToExternalClasses(input))
+.map(
+signature ->
+getUserDefinedMethod(
+func,
+"accumulate",
+signature,
+input,
+cls ->
+Stream.concat(
+   

[PR] Add GPG key for 1.16.3 release [flink-docker]

2023-11-27 Thread via GitHub


1996fanrui opened a new pull request, #164:
URL: https://github.com/apache/flink-docker/pull/164

   The GPG key (`B2D64016B940A7E0B9B72E0D7D0528B28037D8BC`) can be found in the 
1.16.3 vote mail [1] and KEYS [2].
   
   [1]  https://lists.apache.org/thread/dxfmt3v5n0xv5r9tjl30ob5d7y5t7pw3
   [2] https://dist.apache.org/repos/dist/release/flink/KEYS
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33587][docs] Tidy up docs around JDBC [flink]

2023-11-27 Thread via GitHub


alpinegizmo merged PR #23743:
URL: https://github.com/apache/flink/pull/23743


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33617) Upgrade tcnative for flink-shaded

2023-11-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17790103#comment-17790103
 ] 

Sergey Nuyanzin commented on FLINK-33617:
-

Merged to master as 
[a192a0d1b4fa295c4786d1959c1bdb54796c4a41|https://github.com/apache/flink-shaded/commit/a192a0d1b4fa295c4786d1959c1bdb54796c4a41]

> Upgrade tcnative for flink-shaded
> -
>
> Key: FLINK-33617
> URL: https://issues.apache.org/jira/browse/FLINK-33617
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33617) Upgrade tcnative for flink-shaded

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33617.
-
Fix Version/s: shaded-18.0
   Resolution: Fixed

> Upgrade tcnative for flink-shaded
> -
>
> Key: FLINK-33617
> URL: https://issues.apache.org/jira/browse/FLINK-33617
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33617][BuildSystem / Shaded] Upgrade netty-tcnative to 2.0.62.Final [flink-shaded]

2023-11-27 Thread via GitHub


snuyanzin merged PR #129:
URL: https://github.com/apache/flink-shaded/pull/129


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33617) Upgrade tcnative for flink-shaded

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33617.
---

> Upgrade tcnative for flink-shaded
> -
>
> Key: FLINK-33617
> URL: https://issues.apache.org/jira/browse/FLINK-33617
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33340) Bump Jackson to 2.15.3

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33340:

Component/s: BuildSystem / Shaded

> Bump Jackson to 2.15.3
> --
>
> Key: FLINK-33340
> URL: https://issues.apache.org/jira/browse/FLINK-33340
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0, 1.19.0
>
>
> Among others, there are a number of improvements regarding the parsing of 
> numbers (jackson-core):
> https://github.com/FasterXML/jackson-core/blob/2.16/release-notes/VERSION-2.x



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33330) Bump zookeeper to address CVE-2023-44981

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33330:

Component/s: BuildSystem / Shaded

> Bump zookeeper to address CVE-2023-44981
> -
>
> Key: FLINK-33330
> URL: https://issues.apache.org/jira/browse/FLINK-33330
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>
> There is a 
> [CVE-2023-44981|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-44981]
>  which is fixed in 3.7.2, 3.8.3



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33618) Upgrade swagger for flink-shaded to 2.2.19

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33618.
---

> Upgrade swagger for flink-shaded to 2.2.19
> --
>
> Key: FLINK-33618
> URL: https://issues.apache.org/jira/browse/FLINK-33618
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33330) Bump zookeeper to address CVE-2023-44981

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33330.
---

> Bump zookeeper to address CVE-2023-44981
> -
>
> Key: FLINK-33330
> URL: https://issues.apache.org/jira/browse/FLINK-33330
> Project: Flink
>  Issue Type: Technical Debt
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>
> There is a 
> [CVE-2023-44981|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-44981]
>  which is fixed in 3.7.2, 3.8.3



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33340) Bump Jackson to 2.15.3

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33340.
---

> Bump Jackson to 2.15.3
> --
>
> Key: FLINK-33340
> URL: https://issues.apache.org/jira/browse/FLINK-33340
> Project: Flink
>  Issue Type: Technical Debt
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0, 1.19.0
>
>
> Among others, there are a number of improvements regarding the parsing of 
> numbers (jackson-core):
> https://github.com/FasterXML/jackson-core/blob/2.16/release-notes/VERSION-2.x



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-27 Thread via GitHub


JingGe commented on PR #23809:
URL: https://github.com/apache/flink/pull/23809#issuecomment-1827956246

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-33618) Upgrade swagger for flink-shaded to 2.2.19

2023-11-27 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33618.
-
Fix Version/s: shaded-18.0
   Resolution: Fixed

> Upgrade swagger for flink-shaded to 2.2.19
> --
>
> Key: FLINK-33618
> URL: https://issues.apache.org/jira/browse/FLINK-33618
> Project: Flink
>  Issue Type: Technical Debt
>  Components: BuildSystem / Shaded
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: shaded-18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33618][BuildSystem / Shaded] Upgrade swagger to 2.2.19 [flink-shaded]

2023-11-27 Thread via GitHub


snuyanzin merged PR #130:
URL: https://github.com/apache/flink-shaded/pull/130


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-27 Thread via GitHub


JingGe commented on PR #23809:
URL: https://github.com/apache/flink/pull/23809#issuecomment-1827902974

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Support Flink 1.17.1 [flink-statefun]

2023-11-27 Thread via GitHub


MartijnVisser commented on PR #332:
URL: https://github.com/apache/flink-statefun/pull/332#issuecomment-1827902004

   @gfmio For context, I've made 
https://github.com/MartijnVisser/flink-statefun/pull/3 which would probably be 
sufficient


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


